Jan 21 06:35:09 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 06:35:09 crc restorecon[4716]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 06:35:09 crc restorecon[4716]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc 
restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc 
restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 
06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc 
restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc 
restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09
crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 
06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 06:35:09 crc 
restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc 
Jan 21 06:35:09 crc restorecon[4716]: not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 (one record per path; all paths below are relative to /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem):
  tls-ca-bundle.pem
  email-ca-bundle.pem
  objsign-ca-bundle.pem
  directory-hash
  directory-hash/2ae6433e.0
  directory-hash/fde84897.0
  directory-hash/75680d2e.0
  directory-hash/openshift-service-serving-signer_1740288168.pem
  directory-hash/facfc4fa.0
  directory-hash/8f5a969c.0
  directory-hash/CFCA_EV_ROOT.pem
  directory-hash/9ef4a08a.0
  directory-hash/ingress-operator_1740288202.pem
  directory-hash/2f332aed.0
  directory-hash/248c8271.0
  directory-hash/8d10a21f.0
  directory-hash/ACCVRAIZ1.pem
  directory-hash/a94d09e5.0
  directory-hash/3c9a4d3b.0
  directory-hash/40193066.0
  directory-hash/AC_RAIZ_FNMT-RCM.pem
  directory-hash/cd8c0d63.0
  directory-hash/b936d1c6.0
  directory-hash/CA_Disig_Root_R2.pem
  directory-hash/4fd49c6c.0
  directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem
  directory-hash/b81b93f0.0
  directory-hash/5f9a69fa.0
  directory-hash/Certigna.pem
  directory-hash/b30d5fda.0
  directory-hash/ANF_Secure_Server_Root_CA.pem
  directory-hash/b433981b.0
  directory-hash/93851c9e.0
  directory-hash/9282e51c.0
  directory-hash/e7dd1bc4.0
  directory-hash/Actalis_Authentication_Root_CA.pem
  directory-hash/930ac5d2.0
  directory-hash/5f47b495.0
  directory-hash/e113c810.0
  directory-hash/5931b5bc.0
  directory-hash/AffirmTrust_Commercial.pem
  directory-hash/2b349938.0
  directory-hash/e48193cf.0
  directory-hash/302904dd.0
  directory-hash/a716d4ed.0
  directory-hash/AffirmTrust_Networking.pem
  directory-hash/93bc0acc.0
  directory-hash/86212b19.0
  directory-hash/Certigna_Root_CA.pem
  directory-hash/AffirmTrust_Premium.pem
  directory-hash/b727005e.0
  directory-hash/dbc54cab.0
  directory-hash/f51bb24c.0
  directory-hash/c28a8a30.0
  directory-hash/AffirmTrust_Premium_ECC.pem
  directory-hash/9c8dfbd4.0
  directory-hash/ccc52f49.0
  directory-hash/cb1c3204.0
  directory-hash/Amazon_Root_CA_1.pem
  directory-hash/ce5e74ef.0
  directory-hash/fd08c599.0
  directory-hash/Certum_Trusted_Root_CA.pem
  directory-hash/Amazon_Root_CA_2.pem
  directory-hash/6d41d539.0
  directory-hash/fb5fa911.0
  directory-hash/e35234b1.0
  directory-hash/Amazon_Root_CA_3.pem
  directory-hash/8cb5ee0f.0
  directory-hash/7a7c655d.0
  directory-hash/f8fc53da.0
  directory-hash/Amazon_Root_CA_4.pem
  directory-hash/de6d66f3.0
  directory-hash/d41b5e2a.0
  directory-hash/41a3f684.0
  directory-hash/1df5a75f.0
  directory-hash/Atos_TrustedRoot_2011.pem
  directory-hash/e36a6752.0
  directory-hash/b872f2b4.0
  directory-hash/9576d26b.0
  directory-hash/228f89db.0
  directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem
  directory-hash/fb717492.0
  directory-hash/2d21b73c.0
  directory-hash/0b1b94ef.0
  directory-hash/595e996b.0
  directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem
  directory-hash/9b46e03d.0
  directory-hash/128f4b91.0
  directory-hash/Buypass_Class_3_Root_CA.pem
  directory-hash/81f2d2b1.0
  directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem
  directory-hash/3bde41ac.0
  directory-hash/d16a5865.0
  directory-hash/Certum_EC-384_CA.pem
  directory-hash/BJCA_Global_Root_CA1.pem
  directory-hash/0179095f.0
  directory-hash/ffa7f1eb.0
  directory-hash/9482e63a.0
  directory-hash/d4dae3dd.0
  directory-hash/BJCA_Global_Root_CA2.pem
  directory-hash/3e359ba6.0
  directory-hash/7e067d03.0
  directory-hash/95aff9e3.0
  directory-hash/d7746a63.0
  directory-hash/Baltimore_CyberTrust_Root.pem
  directory-hash/653b494a.0
  directory-hash/3ad48a91.0
  directory-hash/Certum_Trusted_Network_CA.pem
  directory-hash/Buypass_Class_2_Root_CA.pem
  directory-hash/54657681.0
  directory-hash/82223c44.0
  directory-hash/e8de2f56.0
  directory-hash/2d9dafe4.0
  directory-hash/d96b65e2.0
  directory-hash/ee64a828.0
  directory-hash/COMODO_Certification_Authority.pem
  directory-hash/40547a79.0
  directory-hash/5a3f0ff8.0
  directory-hash/7a780d93.0
  directory-hash/34d996fb.0
  directory-hash/COMODO_ECC_Certification_Authority.pem
  directory-hash/eed8c118.0
  directory-hash/89c02a45.0
  directory-hash/Certainly_Root_R1.pem
  directory-hash/b1159c4c.0
  directory-hash/COMODO_RSA_Certification_Authority.pem
  directory-hash/d6325660.0
  directory-hash/d4c339cb.0
  directory-hash/8312c4c1.0
  directory-hash/Certainly_Root_E1.pem
  directory-hash/8508e720.0
  directory-hash/5fdd185d.0
  directory-hash/48bec511.0
  directory-hash/69105f4f.0
  directory-hash/GlobalSign.1.pem
  directory-hash/0b9bc432.0
  directory-hash/Certum_Trusted_Network_CA_2.pem
  directory-hash/GTS_Root_R3.pem
  directory-hash/32888f65.0
  directory-hash/CommScope_Public_Trust_ECC_Root-01.pem
  directory-hash/6b03dec0.0
  directory-hash/219d9499.0
  directory-hash/CommScope_Public_Trust_ECC_Root-02.pem
  directory-hash/5acf816d.0
  directory-hash/cbf06781.0
  directory-hash/CommScope_Public_Trust_RSA_Root-01.pem
  directory-hash/GTS_Root_R4.pem
  directory-hash/dc99f41e.0
  directory-hash/CommScope_Public_Trust_RSA_Root-02.pem
  directory-hash/GlobalSign.3.pem
  directory-hash/AAA_Certificate_Services.pem
  directory-hash/985c1f52.0
  directory-hash/8794b4e3.0
  directory-hash/D-TRUST_BR_Root_CA_1_2020.pem
  directory-hash/e7c037b4.0
Jan 21 06:35:09 crc restorecon[4716]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc 
restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 
crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc 
restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:09 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc 
restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc 
restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc 
restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc 
restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 06:35:10 crc restorecon[4716]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 06:35:10 crc restorecon[4716]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 06:35:10 crc kubenswrapper[4913]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 06:35:10 crc kubenswrapper[4913]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 06:35:10 crc kubenswrapper[4913]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 06:35:10 crc kubenswrapper[4913]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 21 06:35:10 crc kubenswrapper[4913]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 21 06:35:10 crc kubenswrapper[4913]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.329821 4913 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336037 4913 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336067 4913 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336077 4913 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336086 4913 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336094 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336103 4913 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336111 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336120 4913 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336128 4913 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336136 
4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336144 4913 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336152 4913 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336160 4913 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336168 4913 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336176 4913 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336184 4913 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336191 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336199 4913 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336222 4913 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336234 4913 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336244 4913 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336252 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336260 4913 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336267 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336275 4913 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336283 4913 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336291 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336298 4913 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336306 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336313 4913 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336321 4913 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336329 4913 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336337 4913 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336345 4913 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336353 4913 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336361 4913 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336369 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336377 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336384 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336392 4913 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336403 4913 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336414 4913 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336423 4913 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336431 4913 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336438 4913 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336446 4913 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336454 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336464 4913 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336471 4913 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336479 4913 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336486 4913 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336494 4913 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336503 4913 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336513 4913 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336522 4913 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336530 4913 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336538 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336549 4913 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336557 4913 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336565 4913 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336572 4913 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336580 4913 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336612 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336621 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336629 4913 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336640 4913 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336651 4913 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336659 4913 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336668 4913 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336677 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.336685 4913 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336852 4913 flags.go:64] FLAG: --address="0.0.0.0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336870 4913 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336887 4913 flags.go:64] FLAG: --anonymous-auth="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336899 4913 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336911 4913 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336921 4913 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336933 4913 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336946 4913 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336956 4913 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336965 4913 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336975 4913 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336984 4913 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.336994 4913 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337003 4913 flags.go:64] FLAG: --cgroup-root=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337012 4913 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337021 4913 flags.go:64] FLAG: --client-ca-file=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337029 4913 flags.go:64] FLAG: --cloud-config=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337038 4913 flags.go:64] FLAG: --cloud-provider=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337048 4913 flags.go:64] FLAG: --cluster-dns="[]"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337057 4913 flags.go:64] FLAG: --cluster-domain=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337066 4913 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337075 4913 flags.go:64] FLAG: --config-dir=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337084 4913 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337094 4913 flags.go:64] FLAG: --container-log-max-files="5"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337106 4913 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337116 4913 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337125 4913 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337134 4913 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337143 4913 flags.go:64] FLAG: --contention-profiling="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337152 4913 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337161 4913 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337171 4913 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337179 4913 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337191 4913 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337200 4913 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337209 4913 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337217 4913 flags.go:64] FLAG: --enable-load-reader="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337226 4913 flags.go:64] FLAG: --enable-server="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337239 4913 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337254 4913 flags.go:64] FLAG: --event-burst="100"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337265 4913 flags.go:64] FLAG: --event-qps="50"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337277 4913 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337288 4913 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337301 4913 flags.go:64] FLAG: --eviction-hard=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337316 4913 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337327 4913 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337339 4913 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337351 4913 flags.go:64] FLAG: --eviction-soft=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337361 4913 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337370 4913 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337385 4913 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337394 4913 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337403 4913 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337412 4913 flags.go:64] FLAG: --fail-swap-on="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337422 4913 flags.go:64] FLAG: --feature-gates=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337444 4913 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337453 4913 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337462 4913 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337471 4913 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337481 4913 flags.go:64] FLAG: --healthz-port="10248"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337490 4913 flags.go:64] FLAG: --help="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337499 4913 flags.go:64] FLAG: --hostname-override=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337507 4913 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337516 4913 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337525 4913 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337534 4913 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337542 4913 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337551 4913 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337560 4913 flags.go:64] FLAG: --image-service-endpoint=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337569 4913 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337579 4913 flags.go:64] FLAG: --kube-api-burst="100"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337616 4913 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337626 4913 flags.go:64] FLAG: --kube-api-qps="50"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337635 4913 flags.go:64] FLAG: --kube-reserved=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337646 4913 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337655 4913 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337665 4913 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337674 4913 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337683 4913 flags.go:64] FLAG: --lock-file=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337693 4913 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337703 4913 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337713 4913 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337726 4913 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337735 4913 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337744 4913 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337753 4913 flags.go:64] FLAG: --logging-format="text"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337787 4913 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337798 4913 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337807 4913 flags.go:64] FLAG: --manifest-url=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337816 4913 flags.go:64] FLAG: --manifest-url-header=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337829 4913 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337838 4913 flags.go:64] FLAG: --max-open-files="1000000"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337849 4913 flags.go:64] FLAG: --max-pods="110"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337858 4913 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337868 4913 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337877 4913 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337886 4913 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337896 4913 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337905 4913 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337915 4913 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337935 4913 flags.go:64] FLAG: --node-status-max-images="50"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337944 4913 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337954 4913 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337963 4913 flags.go:64] FLAG: --pod-cidr=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337972 4913 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337985 4913 flags.go:64] FLAG: --pod-manifest-path=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.337994 4913 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338004 4913 flags.go:64] FLAG: --pods-per-core="0"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338013 4913 flags.go:64] FLAG: --port="10250"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338022 4913 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338031 4913 flags.go:64] FLAG: --provider-id=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338040 4913 flags.go:64] FLAG: --qos-reserved=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338049 4913 flags.go:64] FLAG: --read-only-port="10255"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338058 4913 flags.go:64] FLAG: --register-node="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338090 4913 flags.go:64] FLAG: --register-schedulable="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338101 4913 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338117 4913 flags.go:64] FLAG: --registry-burst="10"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338126 4913 flags.go:64] FLAG: --registry-qps="5"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338135 4913 flags.go:64] FLAG: --reserved-cpus=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338143 4913 flags.go:64] FLAG: --reserved-memory=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338154 4913 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338164 4913 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338173 4913 flags.go:64] FLAG: --rotate-certificates="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338181 4913 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338190 4913 flags.go:64] FLAG: --runonce="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338199 4913 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338208 4913 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338217 4913 flags.go:64] FLAG: --seccomp-default="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338226 4913 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338235 4913 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338245 4913 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338255 4913 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338265 4913 flags.go:64] FLAG: --storage-driver-password="root"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338273 4913 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338282 4913 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338291 4913 flags.go:64] FLAG: --storage-driver-user="root"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338300 4913 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338310 4913 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338319 4913 flags.go:64] FLAG: --system-cgroups=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338327 4913 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338341 4913 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338350 4913 flags.go:64] FLAG: --tls-cert-file=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338359 4913 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338369 4913 flags.go:64] FLAG: --tls-min-version=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338377 4913 flags.go:64] FLAG: --tls-private-key-file=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338386 4913 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338397 4913 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338406 4913 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338415 4913 flags.go:64] FLAG: --v="2"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338432 4913 flags.go:64] FLAG: --version="false"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338443 4913 flags.go:64] FLAG: --vmodule=""
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338456 4913 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.338466 4913 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338773 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338788 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338797 4913 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338805 4913 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338814 4913 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338823 4913 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338834 4913 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338844 4913 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338854 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338863 4913 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338872 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338881 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338889 4913 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338896 4913 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338904 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338914 4913 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338922 4913 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338930 4913 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338938 4913 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338945 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338953 4913 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338961 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338968 4913 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338976 4913 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338983 4913 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.338992 4913 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339000 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339008 4913 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339015 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339023 4913 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339031 4913 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339039 4913 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339046 4913 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339054 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339063 4913 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339070 4913 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339079 4913 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339086 4913 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339094 4913 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339104 4913 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339122 4913 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339131 4913 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339139 4913 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339147 4913 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339156 4913 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339166 4913 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339176 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339184 4913 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339193 4913 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339201 4913 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339209 4913 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339217 4913 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339225 4913 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339243 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339251 4913 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339258 4913 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339266 4913 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339276 4913 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339286 4913 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339295 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339303 4913 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339311 4913 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339319 4913 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339327 4913 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339335 4913 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339342 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339350 4913 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339358 4913 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339365 4913 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339372 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.339381 4913 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.339657 4913 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.350448 4913 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.350511 4913 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350669 4913 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350684 4913 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350693 4913 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350701 4913 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350709 4913 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350718 4913 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350725 4913 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350734 4913 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350742 4913 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350750 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350757 4913 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350766 4913 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350775 4913 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350782 4913 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350790 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350798 4913 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350806 4913 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350813 4913 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350821 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350829 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350837 4913 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 06:35:10 
crc kubenswrapper[4913]: W0121 06:35:10.350881 4913 feature_gate.go:330] unrecognized feature gate: Example Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350889 4913 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350897 4913 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350904 4913 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350912 4913 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350920 4913 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350928 4913 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350935 4913 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350943 4913 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350951 4913 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350959 4913 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350967 4913 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350977 4913 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350988 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.350998 4913 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351012 4913 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351022 4913 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351031 4913 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351039 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351049 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351057 4913 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351065 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351073 4913 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351081 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351089 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351096 4913 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351104 4913 feature_gate.go:330] unrecognized feature gate: 
BootcNodeManagement Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351112 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351119 4913 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351129 4913 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351139 4913 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351149 4913 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351157 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351166 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351174 4913 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351181 4913 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351189 4913 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351197 4913 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351205 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351215 4913 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351225 4913 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351233 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351242 4913 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351251 4913 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351259 4913 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351267 4913 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351275 4913 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351283 4913 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351291 4913 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351300 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.351313 4913 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351539 4913 feature_gate.go:330] unrecognized 
feature gate: ExternalOIDC Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351553 4913 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351622 4913 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351637 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351647 4913 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351656 4913 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351668 4913 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351676 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351684 4913 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351692 4913 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351699 4913 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351707 4913 feature_gate.go:330] unrecognized feature gate: Example Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351715 4913 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351723 4913 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351731 4913 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 06:35:10 crc 
kubenswrapper[4913]: W0121 06:35:10.351738 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351746 4913 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351754 4913 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351762 4913 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351769 4913 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351777 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351785 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351792 4913 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351802 4913 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351813 4913 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351821 4913 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351828 4913 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351837 4913 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351845 4913 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351853 4913 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351861 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351868 4913 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351876 4913 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351884 4913 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351893 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351901 4913 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351909 4913 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351916 4913 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351924 4913 
feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351932 4913 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351940 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351948 4913 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351955 4913 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351963 4913 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351971 4913 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351978 4913 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351986 4913 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.351994 4913 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352002 4913 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352009 4913 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352017 4913 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352024 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352032 4913 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 
21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352040 4913 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352047 4913 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352055 4913 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352062 4913 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352070 4913 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352078 4913 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352085 4913 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352093 4913 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352103 4913 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352115 4913 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352124 4913 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352134 4913 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352144 4913 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352154 4913 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352163 4913 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352173 4913 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352184 4913 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.352193 4913 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.352206 4913 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.352444 4913 server.go:940] "Client rotation is on, will bootstrap in background" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.356481 4913 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.356674 4913 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.357529 4913 server.go:997] "Starting client certificate rotation" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.357570 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.357894 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-11 16:41:31.412218781 +0000 UTC Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.358053 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.365458 4913 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.367378 4913 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.369348 4913 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.379144 4913 log.go:25] "Validated CRI v1 runtime API" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.394492 4913 log.go:25] "Validated CRI v1 image API" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.396828 4913 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.400207 4913 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-06-30-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.400282 4913 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.428559 4913 manager.go:217] Machine: {Timestamp:2026-01-21 06:35:10.426708337 +0000 UTC m=+0.223068020 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7037ee30-9526-47b8-97e2-90db93aaec61 BootID:dc2e078c-6a92-4a2e-a56c-2176218bd01c Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:0e:ab:27 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:0e:ab:27 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:1c:a7:8e Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:48:e2:5d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:78:c3:e7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:75:0a:e1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:f6:92:18:fd:1e:14 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:c5:c3:55:a6:21 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.428963 4913 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.429181 4913 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.430175 4913 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.430401 4913 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.430453 4913 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.430810 4913 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.430822 4913 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.431042 4913 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.431085 4913 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.431321 4913 state_mem.go:36] "Initialized new in-memory state store" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.431890 4913 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.432681 4913 kubelet.go:418] "Attempting to sync node with API server" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.432704 4913 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.432735 4913 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.432752 4913 kubelet.go:324] "Adding apiserver pod source" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.432775 4913 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.435136 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.435209 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.435212 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.435328 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.436164 4913 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.436816 4913 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.438265 4913 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439090 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439139 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439158 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439175 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439203 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439278 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439300 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439329 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439349 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439371 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439421 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439440 4913 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.439970 4913 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.440806 4913 server.go:1280] "Started kubelet" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.441974 4913 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.441949 4913 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.442320 4913 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 06:35:10 crc systemd[1]: Started Kubernetes Kubelet. Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.444536 4913 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.445039 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.445104 4913 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.445171 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:30:24.101466077 +0000 UTC Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.445304 4913 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.448615 4913 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.445445 4913 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.446077 4913 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.447934 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.448958 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.449277 4913 factory.go:55] Registering systemd factory Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.449313 4913 factory.go:221] Registration of the systemd container factory successfully Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.447856 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="200ms" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.447461 4913 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cab7dcc8c1eba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 06:35:10.44075897 +0000 UTC m=+0.237118683,LastTimestamp:2026-01-21 06:35:10.44075897 +0000 UTC m=+0.237118683,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.454928 4913 server.go:460] "Adding debug handlers to kubelet server" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.455767 4913 factory.go:153] Registering CRI-O factory Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.455964 4913 factory.go:221] Registration of the crio container factory successfully Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.456254 4913 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.456414 4913 factory.go:103] Registering Raw factory Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.456547 4913 manager.go:1196] Started watching for new ooms in manager Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.458786 4913 manager.go:319] Starting recovery of all containers Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465132 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465201 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" 
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465218 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465231 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465245 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465258 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465271 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465283 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465300 4913 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465313 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465331 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465344 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465359 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465373 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465388 4913 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465400 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465413 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465425 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465438 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465452 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465470 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465485 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465499 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465516 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465534 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465549 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465662 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465687 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465706 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465724 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465744 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465762 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465780 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465796 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465814 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465831 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465849 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465867 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465884 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465901 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465918 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465936 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465956 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465977 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.465995 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" 
seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466011 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466030 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466048 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466064 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466081 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466100 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466116 
4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466171 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466190 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466209 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466229 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466246 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466263 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466279 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466295 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466311 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466329 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466351 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466368 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466384 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466401 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.466418 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467276 4913 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467312 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467334 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467353 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467370 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467387 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467408 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467424 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467442 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467461 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467481 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467498 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467514 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467530 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467547 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467563 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467578 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467616 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467632 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467646 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467661 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467676 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467691 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467710 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467724 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467741 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467758 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" 
seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467774 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467790 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467804 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467820 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467836 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467852 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467867 
4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467882 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467900 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467917 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467935 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467958 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467976 4913 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.467993 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.468012 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.468048 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471221 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471301 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471353 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471378 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471421 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471443 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471473 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471491 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471519 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471537 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471555 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471581 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471622 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471652 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471667 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471712 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471736 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471752 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471779 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471797 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471823 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" 
seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471853 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471869 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471887 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471915 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471932 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471959 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471974 4913 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.471992 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472022 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472041 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472067 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472090 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472107 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472142 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472164 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472190 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472210 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472228 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472250 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472269 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472290 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472309 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472338 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472377 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472403 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472422 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472446 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472463 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472485 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472502 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472521 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" 
seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472554 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472571 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472650 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472667 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472686 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472708 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472726 4913 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472748 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472765 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472784 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472811 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472828 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472848 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472864 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472879 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472897 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472916 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472935 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472953 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472972 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.472994 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473012 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473035 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473050 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473067 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473087 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473104 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473119 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473141 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473162 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473188 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473230 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473265 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473293 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473313 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473351 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473374 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473391 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473415 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473433 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473457 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473476 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473494 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473516 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473535 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473557 4913 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473574 4913 reconstruct.go:97] "Volume reconstruction finished" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.473586 4913 reconciler.go:26] "Reconciler: start to sync state" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.495998 4913 manager.go:324] Recovery completed Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.508202 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.509822 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.509892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.509905 4913 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.510808 4913 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.510848 4913 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.510887 4913 state_mem.go:36] "Initialized new in-memory state store" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.520905 4913 policy_none.go:49] "None policy: Start" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.521658 4913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.522479 4913 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.522520 4913 state_mem.go:35] "Initializing new in-memory state store" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.525025 4913 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.525083 4913 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.525110 4913 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.525167 4913 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 06:35:10 crc kubenswrapper[4913]: W0121 06:35:10.526349 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.526428 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.551651 4913 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.590073 4913 manager.go:334] "Starting Device Plugin manager" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.590187 4913 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.590246 4913 server.go:79] "Starting device plugin registration server" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.590946 4913 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 06:35:10 crc 
kubenswrapper[4913]: I0121 06:35:10.590979 4913 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.591777 4913 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.591986 4913 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.592013 4913 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.601866 4913 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.626515 4913 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.626780 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.629411 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.629462 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.629477 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.629674 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.630367 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.630516 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.630907 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.630941 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.630954 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.631059 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.631253 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.631334 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632023 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632089 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632111 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632411 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632545 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632643 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632671 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632645 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632786 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.632995 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.633026 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.633038 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.634552 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.634652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.634675 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.634921 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.635138 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.635200 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.635651 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.635699 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.635713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.636611 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.636650 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.636668 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.636929 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.636975 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.637066 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.637113 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.637134 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.638814 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.638853 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.638864 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.652628 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="400ms" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.675920 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.675979 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676012 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676035 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676058 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676078 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc 
kubenswrapper[4913]: I0121 06:35:10.676101 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676163 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676259 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676305 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676329 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676354 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676378 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676445 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.676502 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.691439 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.693504 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.693584 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: 
I0121 06:35:10.693644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.693704 4913 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.694405 4913 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.777761 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.777907 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.777957 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778006 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778049 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778014 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778094 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778031 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778189 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778225 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778255 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778290 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778388 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778419 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778479 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" 
Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778486 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778537 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778573 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778649 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778676 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778711 4913 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778744 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778749 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778818 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778575 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778798 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778438 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778333 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778911 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.778775 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.895014 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.898030 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.898091 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.898115 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.898159 4913 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 06:35:10 crc kubenswrapper[4913]: E0121 06:35:10.898743 4913 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Jan 21 06:35:10 crc kubenswrapper[4913]: I0121 06:35:10.986906 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.015440 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.018690 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.021449 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6c9fb01627829a70a68ed9eefec31d77fa872cad0ea9163d0777c7a63a9849cf WatchSource:0}: Error finding container 6c9fb01627829a70a68ed9eefec31d77fa872cad0ea9163d0777c7a63a9849cf: Status 404 returned error can't find the container with id 6c9fb01627829a70a68ed9eefec31d77fa872cad0ea9163d0777c7a63a9849cf Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.038293 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9295f1aef598acfd56193b8756d6ee2a90d499d7c737bda0e9aeb17d2f8ac104 WatchSource:0}: Error finding container 9295f1aef598acfd56193b8756d6ee2a90d499d7c737bda0e9aeb17d2f8ac104: Status 404 returned error can't find the container with id 9295f1aef598acfd56193b8756d6ee2a90d499d7c737bda0e9aeb17d2f8ac104 Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.041155 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.047485 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.054320 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="800ms" Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.068347 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-35209ec40851f3f43f4739361183b5fea6e439b7cea18436706dbfa7fae1ee74 WatchSource:0}: Error finding container 35209ec40851f3f43f4739361183b5fea6e439b7cea18436706dbfa7fae1ee74: Status 404 returned error can't find the container with id 35209ec40851f3f43f4739361183b5fea6e439b7cea18436706dbfa7fae1ee74 Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.069800 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4fe3ad1f82a43e44ade486c7c6224a462c85252d47b3f5fa624ce5961fda7ff9 WatchSource:0}: Error finding container 4fe3ad1f82a43e44ade486c7c6224a462c85252d47b3f5fa624ce5961fda7ff9: Status 404 returned error can't find the container with id 4fe3ad1f82a43e44ade486c7c6224a462c85252d47b3f5fa624ce5961fda7ff9 Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.238331 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.238440 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to 
list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.299076 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.300268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.300321 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.300340 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.300378 4913 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.300852 4913 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.442801 4913 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.448725 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:01:09.525875883 +0000 UTC Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.471038 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.471530 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.532978 4913 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6" exitCode=0 Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.533110 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6"} Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.533239 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4fe3ad1f82a43e44ade486c7c6224a462c85252d47b3f5fa624ce5961fda7ff9"} Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.533392 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.534985 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.535039 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:11 
crc kubenswrapper[4913]: I0121 06:35:11.535056 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.535456 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727"} Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.535509 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"35209ec40851f3f43f4739361183b5fea6e439b7cea18436706dbfa7fae1ee74"} Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.537354 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204" exitCode=0 Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.537446 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204"} Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.537478 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"440822bbb71d012ca630012e6cfd14d6bfd90d81c36a5252ad56699e809e69d0"} Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.537681 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.538474 4913 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.538508 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.538520 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.539681 4913 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61" exitCode=0 Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.539757 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61"} Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.539778 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9295f1aef598acfd56193b8756d6ee2a90d499d7c737bda0e9aeb17d2f8ac104"} Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.539960 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.540156 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.541490 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.541535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.541552 4913 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.541856 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.541891 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.541904 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.543427 4913 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd" exitCode=0 Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.543459 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd"} Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.543482 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6c9fb01627829a70a68ed9eefec31d77fa872cad0ea9163d0777c7a63a9849cf"} Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.543549 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.544381 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.544405 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:35:11 crc kubenswrapper[4913]: I0121 06:35:11.544417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.626847 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.626948 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:35:11 crc kubenswrapper[4913]: W0121 06:35:11.632722 4913 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.632778 4913 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Jan 21 06:35:11 crc kubenswrapper[4913]: E0121 06:35:11.855548 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" 
interval="1.6s" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.101642 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.102886 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.102917 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.102926 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.102951 4913 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 06:35:12 crc kubenswrapper[4913]: E0121 06:35:12.103284 4913 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.449507 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 18:14:42.241056962 +0000 UTC Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.551686 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32"} Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.551742 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019"} Jan 21 06:35:12 crc 
kubenswrapper[4913]: I0121 06:35:12.551761 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.551772 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.553904 4913 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3" exitCode=0
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.553967 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.554094 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.554817 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.554842 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.554852 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.556713 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.558621 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.558683 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.559440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.559461 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.559470 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.562583 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.562659 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.562678 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.562801 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.564045 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.564077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.564091 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.567012 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.567039 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.567056 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b"}
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.567120 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.568199 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.568238 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:12 crc kubenswrapper[4913]: I0121 06:35:12.568251 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.450003 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 02:00:14.446510973 +0000 UTC
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.576117 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5"}
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.576240 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.577528 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.577573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.577613 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.580271 4913 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b" exitCode=0
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.580349 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b"}
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.580614 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.580806 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.581736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.582265 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.582328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.585998 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.586075 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.586089 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.703923 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.705801 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.705851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.705864 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:13 crc kubenswrapper[4913]: I0121 06:35:13.705896 4913 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.005370 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.014146 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.450245 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:09:04.934277071 +0000 UTC
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.588805 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4"}
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.588889 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.588897 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa"}
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.588935 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2"}
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.588953 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.588977 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.590512 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.590544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.590583 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.590549 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.590636 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.590673 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.630758 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.630967 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.632817 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.632864 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:14 crc kubenswrapper[4913]: I0121 06:35:14.632882 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.451486 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:15:00.309401415 +0000 UTC
Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.599550 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.599692 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.599719 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.599550 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19"}
Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.599808 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400"}
Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.601182 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.601241 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.601262 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.601409 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.601461 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:15 crc kubenswrapper[4913]: I0121 06:35:15.601481 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.451952 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:34:07.69266326 +0000 UTC
Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.561949 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.602264 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.602387 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.602451 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.604727 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.604784 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.604800 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.604849 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.604899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.604941 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:16 crc kubenswrapper[4913]: I0121 06:35:16.947209 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.081428 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.081709 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.081776 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.083490 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.083554 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.083580 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.452327 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 13:58:48.009605249 +0000 UTC
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.577921 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.605176 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.605299 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.605419 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.606669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.606713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.606729 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.607452 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.607543 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:17 crc kubenswrapper[4913]: I0121 06:35:17.607567 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.333450 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.453457 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:31:55.384987993 +0000 UTC
Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.486876 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.487072 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.487208 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.488978 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.489043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.489067 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.609222 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.611068 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.611130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:18 crc kubenswrapper[4913]: I0121 06:35:18.611154 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:19 crc kubenswrapper[4913]: I0121 06:35:19.454383 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:34:32.13335298 +0000 UTC
Jan 21 06:35:19 crc kubenswrapper[4913]: I0121 06:35:19.563005 4913 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 06:35:19 crc kubenswrapper[4913]: I0121 06:35:19.563113 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 06:35:20 crc kubenswrapper[4913]: I0121 06:35:20.385701 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:20 crc kubenswrapper[4913]: I0121 06:35:20.386020 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:20 crc kubenswrapper[4913]: I0121 06:35:20.388036 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:20 crc kubenswrapper[4913]: I0121 06:35:20.388119 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:20 crc kubenswrapper[4913]: I0121 06:35:20.388139 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:20 crc kubenswrapper[4913]: I0121 06:35:20.454717 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:54:26.366544738 +0000 UTC
Jan 21 06:35:20 crc kubenswrapper[4913]: E0121 06:35:20.602011 4913 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 06:35:21 crc kubenswrapper[4913]: I0121 06:35:21.455658 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:14:13.620693737 +0000 UTC
Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.443945 4913 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.456420 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 14:41:02.405060094 +0000 UTC
Jan 21 06:35:22 crc kubenswrapper[4913]: E0121 06:35:22.559079 4913 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.924553 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.924761 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.926061 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.926089 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:22 crc kubenswrapper[4913]: I0121 06:35:22.926097 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:23 crc kubenswrapper[4913]: I0121 06:35:23.110637 4913 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 06:35:23 crc kubenswrapper[4913]: I0121 06:35:23.110782 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 06:35:23 crc kubenswrapper[4913]: I0121 06:35:23.132575 4913 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 06:35:23 crc kubenswrapper[4913]: I0121 06:35:23.132662 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 06:35:23 crc kubenswrapper[4913]: I0121 06:35:23.457815 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:26:21.855650178 +0000 UTC
Jan 21 06:35:24 crc kubenswrapper[4913]: I0121 06:35:24.457985 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 08:32:42.274699282 +0000 UTC
Jan 21 06:35:25 crc kubenswrapper[4913]: I0121 06:35:25.458792 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:15:20.265502574 +0000 UTC
Jan 21 06:35:26 crc kubenswrapper[4913]: I0121 06:35:26.460007 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:50:26.683927078 +0000 UTC
Jan 21 06:35:26 crc kubenswrapper[4913]: I0121 06:35:26.561873 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 06:35:26 crc kubenswrapper[4913]: I0121 06:35:26.573277 4913 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 21 06:35:26 crc kubenswrapper[4913]: I0121 06:35:26.592125 4913 csr.go:261] certificate signing request csr-h7hvp is approved, waiting to be issued
Jan 21 06:35:26 crc kubenswrapper[4913]: I0121 06:35:26.598093 4913 csr.go:257] certificate signing request csr-h7hvp is issued
Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.090841 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.091016 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.092188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.092223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.092236 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.096394 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.460338 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 15:54:18.406324311 +0000 UTC
Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.599681 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 06:30:26 +0000 UTC, rotation deadline is 2026-11-10 07:06:07.287017675 +0000 UTC
Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.599747 4913 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7032h30m39.687274964s for next certificate rotation
Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.634731 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.635968 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.636035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:27 crc kubenswrapper[4913]: I0121 06:35:27.636054 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.096669 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.098902 4913 trace.go:236] Trace[447619719]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 06:35:14.108) (total time: 13989ms):
Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[447619719]: ---"Objects listed" error: 13989ms (06:35:28.098)
Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[447619719]: [13.989910874s] [13.989910874s] END
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.098939 4913 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.101133 4913 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.101866 4913 trace.go:236] Trace[681385259]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 06:35:13.357) (total time: 14744ms):
Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[681385259]: ---"Objects listed" error: 14744ms (06:35:28.101)
Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[681385259]: [14.744047881s] [14.744047881s] END
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.101897 4913 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.102335 4913 trace.go:236] Trace[577624512]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 06:35:13.742) (total time: 14360ms):
Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[577624512]: ---"Objects listed" error: 14359ms (06:35:28.102)
Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[577624512]: [14.36002141s] [14.36002141s] END
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.102359 4913 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.103462 4913 trace.go:236] Trace[694063199]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 06:35:13.978) (total time: 14122ms):
Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[694063199]: ---"Objects listed" error: 14122ms (06:35:28.101)
Jan 21 06:35:28 crc kubenswrapper[4913]: Trace[694063199]: [14.122284153s] [14.122284153s] END
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.103494 4913 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.103682 4913 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.338302 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.431323 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.441720 4913 apiserver.go:52] "Watching apiserver"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.443696 4913 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.444087 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-sqswg","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/iptables-alerter-4ln5h","openshift-dns/node-resolver-jpn7w","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-multus/multus-gn6lz"]
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.444431 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.444544 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.444654 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.444839 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.444944 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.445254 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.445400 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.444951 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.445553 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sqswg"
Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.445637 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.445655 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jpn7w"
Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.445801 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.446894 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.450426 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.450524 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.450614 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.450744 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.450972 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451093 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451109 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451249 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451257 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451269 4913 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451313 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451368 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451444 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451462 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451476 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451582 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451614 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.451661 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.452014 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.452799 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.452865 4913 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.452977 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.454988 4913 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.460563 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 01:04:46.443890349 +0000 UTC Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.472634 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.486428 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.501096 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505821 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505855 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505873 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505891 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505908 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505924 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505939 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.505954 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505969 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.505985 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506001 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506017 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506032 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506048 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506064 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506109 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506132 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506158 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506175 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506196 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506198 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506198 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506375 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506401 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506214 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506420 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506432 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506462 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506793 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506582 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506663 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506761 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506785 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507132 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507225 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.506817 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507287 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507309 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507325 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507354 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507371 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507389 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507404 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507421 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507437 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507453 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507468 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507484 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507499 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507516 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 
06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507533 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507549 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507564 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507580 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507615 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507634 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507656 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507679 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507702 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507720 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507737 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.507758 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507783 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507810 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507832 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507851 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507871 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507889 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507921 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507938 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507953 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507968 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 
06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507983 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.507999 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508014 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508032 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508046 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508062 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508081 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508098 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508115 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508130 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508144 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508160 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508175 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508189 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508207 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508222 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508238 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508254 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508271 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508287 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508302 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508323 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 
06:35:28.508338 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508353 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508370 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508387 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508401 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508418 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508433 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508449 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508464 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508481 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508496 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508511 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508526 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508543 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508558 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508572 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508605 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508622 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508638 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508652 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508667 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508681 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 06:35:28 crc kubenswrapper[4913]: 
I0121 06:35:28.508697 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508729 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508758 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508777 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508792 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508807 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508822 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508840 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508857 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508872 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508886 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.508901 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508916 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508932 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508946 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508961 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508977 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") 
pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.508993 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509011 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509028 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509045 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509062 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 06:35:28 crc kubenswrapper[4913]: 
I0121 06:35:28.509077 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509094 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509111 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509127 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509156 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509173 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509189 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509205 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509220 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509236 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509252 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 
06:35:28.509268 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509284 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509300 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509316 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509334 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509349 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509366 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509382 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509398 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509414 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509431 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 
21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509446 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509462 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509477 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509494 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509514 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509531 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509547 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509563 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509578 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509608 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509626 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.509643 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509659 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509675 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509690 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509706 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509722 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509738 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509835 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509853 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509868 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509883 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.509934 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509951 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509968 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.509984 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510000 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510018 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510035 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510053 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510069 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510086 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510125 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 
21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510142 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510157 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510174 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510190 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510209 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510226 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510243 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510261 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510277 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510295 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510316 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 
06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510332 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510349 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510365 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510382 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510399 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510441 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510464 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510491 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-cnibin\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510507 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-kubelet\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510525 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-daemon-config\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510546 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510563 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-cni-multus\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510582 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6s4k\" (UniqueName: \"kubernetes.io/projected/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-kube-api-access-c6s4k\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510794 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510822 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510849 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510875 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/941d5e91-9bf3-44dc-be69-629cb2516e7c-rootfs\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510893 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-multus-certs\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510910 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/941d5e91-9bf3-44dc-be69-629cb2516e7c-proxy-tls\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510926 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e6f47ec5-848c-4b9b-9828-8dd3ddb96a18-hosts-file\") pod \"node-resolver-jpn7w\" (UID: \"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\") " pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 
06:35:28.510943 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-hostroot\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510959 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-conf-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510974 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.510992 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511012 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.511033 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpzhr\" (UniqueName: \"kubernetes.io/projected/e6f47ec5-848c-4b9b-9828-8dd3ddb96a18-kube-api-access-jpzhr\") pod \"node-resolver-jpn7w\" (UID: \"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\") " pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511048 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-cni-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511067 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-os-release\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511081 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-cni-bin\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511095 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-etc-kubernetes\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511114 4913 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511131 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-k8s-cni-cncf-io\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511147 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-netns\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511163 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/941d5e91-9bf3-44dc-be69-629cb2516e7c-mcd-auth-proxy-config\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511181 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 
06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511210 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511229 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511247 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511262 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-cni-binary-copy\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511278 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-socket-dir-parent\") pod \"multus-gn6lz\" (UID: 
\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511305 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlg6n\" (UniqueName: \"kubernetes.io/projected/941d5e91-9bf3-44dc-be69-629cb2516e7c-kube-api-access-rlg6n\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511319 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-system-cni-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511380 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511392 4913 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511402 4913 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511413 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 
21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511422 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511432 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511441 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511451 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511461 4913 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511474 4913 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511488 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511500 4913 reconciler_common.go:293] "Volume 
detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511513 4913 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.511526 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.512833 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.513425 4913 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.513937 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514155 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514206 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514238 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514239 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514435 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514457 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514509 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514415 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514752 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.514927 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515314 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515303 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515402 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515418 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515467 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515536 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515647 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.515955 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.516223 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.516505 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.516897 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.516977 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.517213 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.517470 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.517660 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.517768 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.517898 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.517906 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518122 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518156 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518315 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518349 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518521 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518725 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518780 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.518910 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.519047 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.520657 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.526279 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.520854 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.520998 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.521156 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.521376 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.522260 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.522714 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525115 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525302 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525325 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525536 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525755 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525917 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525959 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.525987 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.526002 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.526447 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.526635 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.526645 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.526721 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:29.026697107 +0000 UTC m=+18.823056770 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.526903 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527012 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527177 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527573 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527699 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527961 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.528267 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.528543 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.528626 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.528933 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.529195 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.529439 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.529688 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.529761 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.530108 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.530174 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:29.030154879 +0000 UTC m=+18.826514622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.530276 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.529938 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.531190 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.531565 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.531607 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.531717 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.531918 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.531983 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532140 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532178 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532263 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527675 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532398 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532581 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532716 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532787 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532823 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532884 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.533338 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.533367 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.533379 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.533435 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:29.033422326 +0000 UTC m=+18.829781999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.533898 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534037 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534386 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534404 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534649 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.532913 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534863 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534925 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.535094 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.535209 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.535124 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.535947 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536125 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536418 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536611 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536696 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536815 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536987 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.536958 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537022 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537092 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537189 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537339 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537424 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537445 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537475 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537685 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.537772 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.537842 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:35:29.037821684 +0000 UTC m=+18.834181357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538001 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538014 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538033 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538146 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538215 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538240 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538292 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538463 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.538546 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.539916 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.539929 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539963 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.540003 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:29.039990441 +0000 UTC m=+18.836350114 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539999 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540074 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.534228 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540484 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540519 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540609 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540670 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538641 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540799 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540918 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.540985 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.541062 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.541193 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.541520 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.541546 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.541738 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.541754 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538709 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538722 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538748 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538758 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538875 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538952 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527876 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539158 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539404 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539470 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539762 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.542185 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.542247 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.542578 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.538674 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.527847 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.539092 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.542819 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.542865 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.543383 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.544307 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.545172 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.545954 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.546385 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.546498 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.549948 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.550432 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.552486 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.552564 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.552907 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.552926 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.553041 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.553486 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554423 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554689 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554764 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554788 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554825 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554900 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554915 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.554954 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.555470 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.555570 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.556338 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.559248 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.562246 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.562855 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.563075 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.565339 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.565754 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.566631 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.568871 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.570377 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.571088 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.572424 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.572400 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.574462 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.576201 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.577078 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.578783 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.579686 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.580799 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.581211 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.581870 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.583235 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.583228 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.584434 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.585152 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.586486 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.586965 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.588162 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.588669 4913 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.588789 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.588857 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.589322 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.590309 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.590807 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.592142 4913 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.592248 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.593092 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2
c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod 
\"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.594067 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.595033 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.595472 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.596934 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.597950 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.598498 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.599563 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.600228 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.600688 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.601829 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.602803 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.602802 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.603388 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.604218 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.604883 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.605732 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.606446 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.607293 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.607758 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.608244 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.609141 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.609895 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.610842 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612333 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-cni-multus\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612373 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6s4k\" (UniqueName: \"kubernetes.io/projected/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-kube-api-access-c6s4k\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612412 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/941d5e91-9bf3-44dc-be69-629cb2516e7c-rootfs\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " 
pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612442 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-multus-certs\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612469 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-cni-multus\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612516 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-multus-certs\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612580 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/941d5e91-9bf3-44dc-be69-629cb2516e7c-rootfs\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612473 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/941d5e91-9bf3-44dc-be69-629cb2516e7c-proxy-tls\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.612718 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e6f47ec5-848c-4b9b-9828-8dd3ddb96a18-hosts-file\") pod \"node-resolver-jpn7w\" (UID: \"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\") " pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612760 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612833 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpzhr\" (UniqueName: \"kubernetes.io/projected/e6f47ec5-848c-4b9b-9828-8dd3ddb96a18-kube-api-access-jpzhr\") pod \"node-resolver-jpn7w\" (UID: \"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\") " pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612860 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-cni-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612877 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e6f47ec5-848c-4b9b-9828-8dd3ddb96a18-hosts-file\") pod \"node-resolver-jpn7w\" (UID: \"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\") " pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612886 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-os-release\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612910 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-hostroot\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612936 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-conf-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612950 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-cni-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.612981 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-k8s-cni-cncf-io\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613006 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-netns\") pod \"multus-gn6lz\" (UID: 
\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613034 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-cni-bin\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613051 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-os-release\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613063 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-etc-kubernetes\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613081 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-conf-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613011 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-hostroot\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613112 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-k8s-cni-cncf-io\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613107 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-run-netns\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613129 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/941d5e91-9bf3-44dc-be69-629cb2516e7c-mcd-auth-proxy-config\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613114 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613167 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-etc-kubernetes\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613186 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlg6n\" (UniqueName: 
\"kubernetes.io/projected/941d5e91-9bf3-44dc-be69-629cb2516e7c-kube-api-access-rlg6n\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613217 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-system-cni-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613213 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-cni-bin\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613366 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-system-cni-dir\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613392 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-cni-binary-copy\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613436 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-socket-dir-parent\") pod \"multus-gn6lz\" (UID: 
\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613458 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-cnibin\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613474 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-kubelet\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613513 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-daemon-config\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613547 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613562 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-cnibin\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613647 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613692 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-socket-dir-parent\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613699 4913 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613736 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-host-var-lib-kubelet\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613752 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613792 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613802 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613812 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613822 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613832 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613868 4913 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613877 4913 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613887 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613898 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") 
on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613908 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613941 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613952 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613963 4913 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613973 4913 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613982 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.613992 4913 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614027 4913 
reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614037 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614046 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614056 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614066 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614100 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614111 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614120 4913 reconciler_common.go:293] "Volume detached 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614126 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-multus-daemon-config\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614131 4913 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614163 4913 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614174 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614186 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614198 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614208 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614218 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614220 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-cni-binary-copy\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614228 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614287 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614302 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614317 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614331 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" 
(UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614344 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614356 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614369 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614383 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614397 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614410 4913 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614423 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node 
\"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614436 4913 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614451 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614467 4913 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614479 4913 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614490 4913 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614502 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614515 4913 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614527 4913 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614537 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/941d5e91-9bf3-44dc-be69-629cb2516e7c-mcd-auth-proxy-config\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614544 4913 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614598 4913 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614610 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614621 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614630 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 
crc kubenswrapper[4913]: I0121 06:35:28.614640 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614649 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614760 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614772 4913 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614781 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614792 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614803 4913 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614813 4913 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614823 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614833 4913 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614842 4913 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614893 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614904 4913 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614935 4913 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614944 4913 reconciler_common.go:293] "Volume detached for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614953 4913 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.614962 4913 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615005 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615015 4913 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615024 4913 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615032 4913 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615040 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" 
DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615049 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615057 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615066 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615075 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615083 4913 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615092 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615101 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.615110 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615122 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615133 4913 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615141 4913 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615151 4913 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615159 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615167 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615176 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615186 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615197 4913 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615209 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615218 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615228 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615237 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615245 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615254 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615262 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615271 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615281 4913 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615292 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615300 4913 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615308 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node 
\"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615318 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615326 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615334 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615343 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615351 4913 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615359 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615367 4913 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615375 4913 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615385 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615393 4913 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615401 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615409 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615416 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615425 4913 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615432 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.615440 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616151 4913 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616166 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616174 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616184 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616192 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616211 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" 
DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616220 4913 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616228 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616236 4913 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616246 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616258 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616266 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616275 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.616285 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616293 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616301 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616309 4913 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616319 4913 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616329 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616339 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616350 4913 
reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616361 4913 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616372 4913 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616382 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616393 4913 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616403 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616412 4913 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616424 4913 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616433 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616441 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616451 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616460 4913 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616469 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616478 4913 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616487 4913 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616496 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616506 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616515 4913 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616525 4913 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616533 4913 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616542 4913 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616551 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node 
\"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616561 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616569 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616579 4913 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616603 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616613 4913 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616622 4913 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616632 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616640 4913 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616650 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616658 4913 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616668 4913 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616678 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.616687 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.618605 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/941d5e91-9bf3-44dc-be69-629cb2516e7c-proxy-tls\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.629404 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6s4k\" (UniqueName: \"kubernetes.io/projected/b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf-kube-api-access-c6s4k\") pod \"multus-gn6lz\" (UID: \"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\") " pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.629505 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpzhr\" (UniqueName: \"kubernetes.io/projected/e6f47ec5-848c-4b9b-9828-8dd3ddb96a18-kube-api-access-jpzhr\") pod \"node-resolver-jpn7w\" (UID: \"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\") " pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.633778 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlg6n\" (UniqueName: \"kubernetes.io/projected/941d5e91-9bf3-44dc-be69-629cb2516e7c-kube-api-access-rlg6n\") pod \"machine-config-daemon-sqswg\" (UID: \"941d5e91-9bf3-44dc-be69-629cb2516e7c\") " pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.640197 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.643150 4913 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.651389 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.661861 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.669454 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.680292 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822
a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.693427 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.708311 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.718642 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.729297 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.733917 4913 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58508->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.733988 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58508->192.168.126.11:17697: read: connection reset by peer" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.733930 4913 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55662->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.734140 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55662->192.168.126.11:17697: read: connection reset by peer" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.734684 4913 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.734716 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.741503 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.744320 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2lxrr"] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.745006 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.745257 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-wfcsc"] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.745945 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c7xtt"] Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.746136 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.746304 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.746564 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.746837 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.747776 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.751270 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.751334 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.751334 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.751373 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.751548 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.751823 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.752171 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.754516 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.760773 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.769984 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.773240 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.784142 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.789279 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.792861 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.800138 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jpn7w" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.803887 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.809173 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gn6lz" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.816978 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.820919 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.820972 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4r7z\" (UniqueName: \"kubernetes.io/projected/60ed8982-ee20-4330-861f-61509c39bbe7-kube-api-access-t4r7z\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.821016 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-etc-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.821042 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-systemd-units\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.821064 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8229\" (UniqueName: \"kubernetes.io/projected/afe1e161-7227-48ff-824e-01d26e5c7218-kube-api-access-j8229\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 
06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.821084 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.821106 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-config\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825352 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-log-socket\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825431 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-ovn-kubernetes\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825577 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-script-lib\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825695 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afe1e161-7227-48ff-824e-01d26e5c7218-ovn-node-metrics-cert\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825746 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jfg\" (UniqueName: \"kubernetes.io/projected/0e8f223b-fd76-4720-a29f-cb89654e33f5-kube-api-access-h8jfg\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825843 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-slash\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825889 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-node-log\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825938 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-ovn\") pod \"ovnkube-node-c7xtt\" (UID: 
\"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.825974 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-systemd\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826012 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826050 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-os-release\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826089 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-kubelet\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826132 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/0e8f223b-fd76-4720-a29f-cb89654e33f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826175 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826211 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-cnibin\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826275 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-netns\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826323 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-var-lib-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826353 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-netd\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826401 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-env-overrides\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826440 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-system-cni-dir\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826482 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0e8f223b-fd76-4720-a29f-cb89654e33f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.826520 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-bin\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc 
kubenswrapper[4913]: I0121 06:35:28.842292 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.851523 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ff
ac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.864513 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.876127 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.884792 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.894093 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.901661 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.915865 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.926918 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927239 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-ovn\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927291 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-systemd\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927313 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927338 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-os-release\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927361 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-kubelet\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927382 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0e8f223b-fd76-4720-a29f-cb89654e33f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927404 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927432 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-cnibin\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927437 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-kubelet\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927463 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-netns\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927379 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-ovn\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927439 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927505 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-netns\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927401 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-systemd\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927523 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-var-lib-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc 
kubenswrapper[4913]: E0121 06:35:28.927567 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927577 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-os-release\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927623 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-cnibin\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927630 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-netd\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: E0121 06:35:28.927691 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:29.427670096 +0000 UTC m=+19.224029769 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927706 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-var-lib-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927767 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-env-overrides\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927731 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-netd\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927836 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-system-cni-dir\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927875 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0e8f223b-fd76-4720-a29f-cb89654e33f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927927 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-bin\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927931 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-system-cni-dir\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927960 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927990 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-bin\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.927993 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t4r7z\" (UniqueName: \"kubernetes.io/projected/60ed8982-ee20-4330-861f-61509c39bbe7-kube-api-access-t4r7z\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928049 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-etc-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928088 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-systemd-units\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928133 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8229\" (UniqueName: \"kubernetes.io/projected/afe1e161-7227-48ff-824e-01d26e5c7218-kube-api-access-j8229\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928170 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928216 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-config\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928263 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-log-socket\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928297 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-ovn-kubernetes\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928327 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-script-lib\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928361 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afe1e161-7227-48ff-824e-01d26e5c7218-ovn-node-metrics-cert\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928373 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-env-overrides\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928383 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928394 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8jfg\" (UniqueName: \"kubernetes.io/projected/0e8f223b-fd76-4720-a29f-cb89654e33f5-kube-api-access-h8jfg\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928412 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-ovn-kubernetes\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928421 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-etc-openvswitch\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928420 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/0e8f223b-fd76-4720-a29f-cb89654e33f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928449 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-slash\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928481 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-log-socket\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928518 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-node-log\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928454 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-systemd-units\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928777 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-slash\") pod \"ovnkube-node-c7xtt\" 
(UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928856 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-node-log\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928900 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0e8f223b-fd76-4720-a29f-cb89654e33f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.928988 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-config\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.929185 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-script-lib\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.932730 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afe1e161-7227-48ff-824e-01d26e5c7218-ovn-node-metrics-cert\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.954547 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0e8f223b-fd76-4720-a29f-cb89654e33f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.957680 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8229\" (UniqueName: \"kubernetes.io/projected/afe1e161-7227-48ff-824e-01d26e5c7218-kube-api-access-j8229\") pod \"ovnkube-node-c7xtt\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") " pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:28 crc kubenswrapper[4913]: W0121 06:35:28.959696 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-6dbeeeeca7a8d9ef6a945adcea1c98b4773d730ff8320fbdfcbd41b99dc5ddba WatchSource:0}: Error finding container 6dbeeeeca7a8d9ef6a945adcea1c98b4773d730ff8320fbdfcbd41b99dc5ddba: Status 404 returned error can't find the container with id 6dbeeeeca7a8d9ef6a945adcea1c98b4773d730ff8320fbdfcbd41b99dc5ddba Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.969615 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4r7z\" (UniqueName: \"kubernetes.io/projected/60ed8982-ee20-4330-861f-61509c39bbe7-kube-api-access-t4r7z\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:28 crc kubenswrapper[4913]: I0121 06:35:28.974629 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8jfg\" (UniqueName: 
\"kubernetes.io/projected/0e8f223b-fd76-4720-a29f-cb89654e33f5-kube-api-access-h8jfg\") pod \"multus-additional-cni-plugins-2lxrr\" (UID: \"0e8f223b-fd76-4720-a29f-cb89654e33f5\") " pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.029057 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:29 crc kubenswrapper[4913]: W0121 06:35:29.029252 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6f47ec5_848c_4b9b_9828_8dd3ddb96a18.slice/crio-9807cabbb6df0430dfff1508c321d5e7e1a61d172a63f61f42a206bcc144557d WatchSource:0}: Error finding container 9807cabbb6df0430dfff1508c321d5e7e1a61d172a63f61f42a206bcc144557d: Status 404 returned error can't find the container with id 9807cabbb6df0430dfff1508c321d5e7e1a61d172a63f61f42a206bcc144557d Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.029316 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.029452 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:30.029407302 +0000 UTC m=+19.825767145 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.062476 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.075768 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:29 crc kubenswrapper[4913]: W0121 06:35:29.098107 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e8f223b_fd76_4720_a29f_cb89654e33f5.slice/crio-cfd98727d4fdb5a9b3135671e6b4298f9f9e6c0badd2c3ea525a0a65894e95cd WatchSource:0}: Error finding container cfd98727d4fdb5a9b3135671e6b4298f9f9e6c0badd2c3ea525a0a65894e95cd: Status 404 returned error can't find the container with id cfd98727d4fdb5a9b3135671e6b4298f9f9e6c0badd2c3ea525a0a65894e95cd Jan 21 06:35:29 crc kubenswrapper[4913]: W0121 06:35:29.100098 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafe1e161_7227_48ff_824e_01d26e5c7218.slice/crio-2833783e18704972728c468ce917bd320a34d8e1f9fbe5476ad9edc9fb8db6c8 WatchSource:0}: Error finding container 2833783e18704972728c468ce917bd320a34d8e1f9fbe5476ad9edc9fb8db6c8: Status 404 returned error can't find the container with id 2833783e18704972728c468ce917bd320a34d8e1f9fbe5476ad9edc9fb8db6c8 Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.129633 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.129787 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.129839 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.129871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.130037 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.130066 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 
06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.130080 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.130134 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:30.130117579 +0000 UTC m=+19.926477252 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.130928 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:35:30.130896479 +0000 UTC m=+19.927256152 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.130968 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.131009 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.131052 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.131153 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:30.131130095 +0000 UTC m=+19.927489768 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.134004 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.134483 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:30.134453565 +0000 UTC m=+19.930813228 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.433224 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.433456 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.433535 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:30.433515475 +0000 UTC m=+20.229875158 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.461237 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:52:35.634672229 +0000 UTC Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.525832 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.526015 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.526105 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:29 crc kubenswrapper[4913]: E0121 06:35:29.526211 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.641578 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6dbeeeeca7a8d9ef6a945adcea1c98b4773d730ff8320fbdfcbd41b99dc5ddba"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.643444 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.643518 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.643534 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fa53d1d26f46d79ba2712bdef32dc78c68266a4dcbf6484c4a4f97fa72127511"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.644962 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jpn7w" event={"ID":"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18","Type":"ContainerStarted","Data":"55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.645020 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jpn7w" 
event={"ID":"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18","Type":"ContainerStarted","Data":"9807cabbb6df0430dfff1508c321d5e7e1a61d172a63f61f42a206bcc144557d"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.646347 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerStarted","Data":"9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.646379 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerStarted","Data":"b48afd46fe5786572eb363a2c7f5ee1a2f4a64a17faf9e11435f088851553a0f"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.648039 4913 generic.go:334] "Generic (PLEG): container finished" podID="0e8f223b-fd76-4720-a29f-cb89654e33f5" containerID="cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c" exitCode=0 Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.648118 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerDied","Data":"cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.648150 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerStarted","Data":"cfd98727d4fdb5a9b3135671e6b4298f9f9e6c0badd2c3ea525a0a65894e95cd"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.650025 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.650074 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1cf4efd45fe179ee8c2d0c28f1b8b52d745fae6fec8517534c6e79d837dcd3b1"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.652457 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.654179 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5" exitCode=255 Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.654246 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.657047 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.657109 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355"} Jan 21 06:35:29 crc 
kubenswrapper[4913]: I0121 06:35:29.657123 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"f7eeb75a512ebcc9379120edfa63ce55c0ba381fb18a26f4c7e7e0c9e4af6357"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.658415 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" exitCode=0 Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.658596 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.658649 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"2833783e18704972728c468ce917bd320a34d8e1f9fbe5476ad9edc9fb8db6c8"} Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.664017 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.673584 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.688321 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.699738 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.709536 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.721893 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.734009 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.752491 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc 
kubenswrapper[4913]: I0121 06:35:29.756968 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.757681 4913 scope.go:117] "RemoveContainer" containerID="52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.795944 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.852837 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.883846 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.921871 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.944860 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.975303 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:29 crc kubenswrapper[4913]: I0121 06:35:29.998927 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:29Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.013688 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.025057 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.034192 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.042282 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.042397 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.042482 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:32.042463584 +0000 UTC m=+21.838823257 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.054541 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.067219 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.079452 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.093726 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.112526 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.125466 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236a
a6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.137908 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc 
kubenswrapper[4913]: I0121 06:35:30.143744 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.143841 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:35:32.143808019 +0000 UTC m=+21.940167692 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.143926 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.143973 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" 
(UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.144004 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144110 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144128 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144132 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144140 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144147 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144153 4913 projected.go:194] Error preparing data for 
projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144148 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144191 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:32.144184779 +0000 UTC m=+21.940544452 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144206 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:32.144199929 +0000 UTC m=+21.940559602 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.144229 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:32.14421154 +0000 UTC m=+21.940571213 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.160337 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.174494 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.358038 4913 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new 
credentials" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.447134 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.447263 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.447317 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:32.447304817 +0000 UTC m=+22.243664490 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.462235 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 15:47:44.093370318 +0000 UTC Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.526074 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.526189 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.526320 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:30 crc kubenswrapper[4913]: E0121 06:35:30.526411 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.532518 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.533646 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.534796 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.539561 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.558947 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.577908 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.593430 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.609726 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc 
kubenswrapper[4913]: I0121 06:35:30.635585 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.651157 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.665775 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.669884 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.671375 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31"} Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.671643 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.674262 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.674378 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.675878 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerStarted","Data":"bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243"} Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.683457 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.696546 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.709920 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.723669 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.732646 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.754836 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.766325 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.779240 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.790236 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.802922 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.821244 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.833844 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.854313 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.892825 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.919642 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.936126 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.956470 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.973953 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:30 crc kubenswrapper[4913]: I0121 06:35:30.994652 4913 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.009732 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc 
kubenswrapper[4913]: I0121 06:35:31.303559 4913 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.306287 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.306330 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.306342 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.306480 4913 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.313726 4913 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.313974 4913 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.315152 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.315218 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.315253 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.315309 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.315329 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.337151 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.340859 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.340910 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.340922 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.340941 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.340956 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.357498 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.360733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.360767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.360778 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.360795 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.360807 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.376644 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.380421 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.380470 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.380481 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.380502 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.380515 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.393982 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.398258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.398284 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.398294 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.398308 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.398317 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.412386 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.413031 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.415389 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.415444 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.415478 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.415502 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.415521 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.462991 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:20:00.595455093 +0000 UTC Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.519226 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.519276 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.519288 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.519310 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.519329 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.526299 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.526349 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.526534 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:31 crc kubenswrapper[4913]: E0121 06:35:31.526723 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.622822 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.622877 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.622888 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.622915 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.622929 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.683623 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.683687 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.683709 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.683728 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.685829 4913 generic.go:334] "Generic (PLEG): container finished" podID="0e8f223b-fd76-4720-a29f-cb89654e33f5" containerID="bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243" exitCode=0 Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.685908 4913 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerDied","Data":"bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.687961 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.700089 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.716856 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.725516 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc 
kubenswrapper[4913]: I0121 06:35:31.725563 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.725575 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.725641 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.725657 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.728330 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.749415 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.769772 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.780524 4913 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.797653 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.810673 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.823047 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.828929 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.828984 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.828995 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.829013 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.829028 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.839674 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.857326 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.872030 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 
06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.884675 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.894690 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.909017 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.923924 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.931205 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.931241 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.931254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:31 crc 
kubenswrapper[4913]: I0121 06:35:31.931274 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.931288 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:31Z","lastTransitionTime":"2026-01-21T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.939730 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc 
kubenswrapper[4913]: I0121 06:35:31.961874 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.978758 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:31 crc kubenswrapper[4913]: I0121 06:35:31.993759 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.004530 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.015297 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.028247 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 
06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.033223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.033258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.033271 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.033286 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.033298 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.038337 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-cpmwx"] Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.038738 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.040667 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.040782 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.040809 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.041311 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.042341 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.063933 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.064109 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.064175 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" 
failed. No retries permitted until 2026-01-21 06:35:36.064156573 +0000 UTC m=+25.860516246 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.073063 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.121820 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"n
ame\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.135102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.135130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.135139 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.135153 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.135163 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.155534 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165053 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165146 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kgdl\" (UniqueName: \"kubernetes.io/projected/440ae0d9-f160-4f49-8b38-61c65d93eea4-kube-api-access-2kgdl\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165181 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165243 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 06:35:36.165215729 +0000 UTC m=+25.961575402 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165274 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165287 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165297 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165316 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/440ae0d9-f160-4f49-8b38-61c65d93eea4-host\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165330 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:36.165319272 +0000 UTC m=+25.961678945 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165345 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/440ae0d9-f160-4f49-8b38-61c65d93eea4-serviceca\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165392 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.165447 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 
06:35:32.165479 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165515 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:36.165507528 +0000 UTC m=+25.961867201 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165519 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165530 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165537 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.165563 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:36.165554459 +0000 UTC m=+25.961914132 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.191271 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.232690 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.237233 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.237272 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.237281 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc 
kubenswrapper[4913]: I0121 06:35:32.237297 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.237307 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.266952 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/440ae0d9-f160-4f49-8b38-61c65d93eea4-host\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.267005 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/440ae0d9-f160-4f49-8b38-61c65d93eea4-serviceca\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.267104 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kgdl\" (UniqueName: \"kubernetes.io/projected/440ae0d9-f160-4f49-8b38-61c65d93eea4-kube-api-access-2kgdl\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.267129 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/440ae0d9-f160-4f49-8b38-61c65d93eea4-host\") pod 
\"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.268756 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/440ae0d9-f160-4f49-8b38-61c65d93eea4-serviceca\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.276715 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.308485 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kgdl\" (UniqueName: \"kubernetes.io/projected/440ae0d9-f160-4f49-8b38-61c65d93eea4-kube-api-access-2kgdl\") pod \"node-ca-cpmwx\" (UID: \"440ae0d9-f160-4f49-8b38-61c65d93eea4\") " pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.335262 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.339974 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.340017 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc 
kubenswrapper[4913]: I0121 06:35:32.340029 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.340049 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.340067 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.354064 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cpmwx" Jan 21 06:35:32 crc kubenswrapper[4913]: W0121 06:35:32.366165 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod440ae0d9_f160_4f49_8b38_61c65d93eea4.slice/crio-a7f6138a3900627d7425afe80f2c1ab28b4fc6fcbe1ba5b081e408d5dbeddbe5 WatchSource:0}: Error finding container a7f6138a3900627d7425afe80f2c1ab28b4fc6fcbe1ba5b081e408d5dbeddbe5: Status 404 returned error can't find the container with id a7f6138a3900627d7425afe80f2c1ab28b4fc6fcbe1ba5b081e408d5dbeddbe5 Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.374956 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.415315 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.442801 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.442840 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.442851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.442867 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.442877 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.452304 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.463618 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:26:25.330634142 +0000 UTC Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.467957 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:32 crc 
kubenswrapper[4913]: E0121 06:35:32.468091 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.468143 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:36.468129523 +0000 UTC m=+26.264489206 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.489916 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.525345 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.525432 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.525537 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:32 crc kubenswrapper[4913]: E0121 06:35:32.525658 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.534241 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.544949 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.544984 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.544994 4913 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.545012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.545024 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.570786 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.611574 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.647064 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc 
kubenswrapper[4913]: I0121 06:35:32.647095 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.647104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.647118 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.647127 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.650056 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.693877 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.693968 4913 generic.go:334] "Generic (PLEG): container finished" podID="0e8f223b-fd76-4720-a29f-cb89654e33f5" containerID="84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8" exitCode=0 Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.694007 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerDied","Data":"84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.695625 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cpmwx" event={"ID":"440ae0d9-f160-4f49-8b38-61c65d93eea4","Type":"ContainerStarted","Data":"954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.695657 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cpmwx" 
event={"ID":"440ae0d9-f160-4f49-8b38-61c65d93eea4","Type":"ContainerStarted","Data":"a7f6138a3900627d7425afe80f2c1ab28b4fc6fcbe1ba5b081e408d5dbeddbe5"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.731253 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name
\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.749987 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.750040 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.750054 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.750077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.750093 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.774526 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.819988 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.852208 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.852631 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.852683 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.852700 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.852722 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.852738 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.897068 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-
binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.931192 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.947783 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.958153 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.958196 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.958926 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.959003 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.959030 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:32Z","lastTransitionTime":"2026-01-21T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.961949 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.972623 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:32 crc kubenswrapper[4913]: I0121 06:35:32.994748 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.032228 4913 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.061961 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.061990 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.061999 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.062012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.062021 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.073274 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 
21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.115068 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.156002 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.164870 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.164900 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.164908 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.164921 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.164930 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.196738 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.237451 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.267374 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.267411 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.267424 4913 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.267439 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.267451 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.275156 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.315095 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 
06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.352451 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.369838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.369870 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.369879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.369893 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.369903 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.392928 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.439484 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.464194 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:52:20.854542089 +0000 UTC Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.472538 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.472621 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.472637 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.472655 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.472667 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.475434 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc 
kubenswrapper[4913]: I0121 06:35:33.517082 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.526065 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.526158 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:33 crc kubenswrapper[4913]: E0121 06:35:33.526227 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:33 crc kubenswrapper[4913]: E0121 06:35:33.526344 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.563522 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-2
1T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.575414 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.575455 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.575469 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.575491 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.575506 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.594825 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.632421 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.674938 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.677573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.677730 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.677827 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.677920 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.678024 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.703250 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.705795 4913 generic.go:334] "Generic (PLEG): container finished" podID="0e8f223b-fd76-4720-a29f-cb89654e33f5" containerID="24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30" exitCode=0 Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.705874 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerDied","Data":"24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.717036 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.757161 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.780153 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.780234 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.780261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.780298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.780322 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.801165 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.842531 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 
06:35:33.876871 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.886896 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.886945 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.886963 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.886983 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc 
kubenswrapper[4913]: I0121 06:35:33.886996 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.914962 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.952055 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.990052 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.990091 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.990102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.990119 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.990130 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:33Z","lastTransitionTime":"2026-01-21T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:33 crc kubenswrapper[4913]: I0121 06:35:33.996122 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.033671 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.075488 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.092756 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.092812 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.092828 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.092851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.092868 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.120714 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.155145 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.191471 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.194982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.195049 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.195073 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.195100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.195121 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.279200 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.295585 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\
\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.299567 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.299755 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.299839 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.299958 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.300055 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.323002 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.353416 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.403164 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.403617 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.403774 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.403916 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.404062 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.411286 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.439150 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.465205 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:44:55.383113781 +0000 UTC Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.474908 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.506285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.506312 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.506320 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc 
kubenswrapper[4913]: I0121 06:35:34.506332 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.506341 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.515833 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc 
kubenswrapper[4913]: I0121 06:35:34.525316 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:34 crc kubenswrapper[4913]: E0121 06:35:34.525548 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.525675 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:34 crc kubenswrapper[4913]: E0121 06:35:34.525912 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.559409 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.597468 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.609240 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.609286 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.609304 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc 
kubenswrapper[4913]: I0121 06:35:34.609334 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.609351 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.641579 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.679096 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.711724 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.711995 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.712198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 
06:35:34.712371 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.712553 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.713431 4913 generic.go:334] "Generic (PLEG): container finished" podID="0e8f223b-fd76-4720-a29f-cb89654e33f5" containerID="117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7" exitCode=0 Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.713478 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerDied","Data":"117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.724905 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.758309 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.795586 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.815993 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.816073 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.816092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.816120 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.816145 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.838904 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.893915 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594
d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.917751 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.919440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.919479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.919496 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.919528 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.919545 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:34Z","lastTransitionTime":"2026-01-21T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.958435 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc
/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:34 crc kubenswrapper[4913]: I0121 06:35:34.996233 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc 
kubenswrapper[4913]: I0121 06:35:35.023308 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.023375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.023398 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.023427 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.023449 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.049024 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.081232 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.122073 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.126501 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.126627 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.126660 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 
06:35:35.126696 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.126721 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.154088 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.196866 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.229648 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.229729 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.229750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.229780 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.229803 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.240931 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.279260 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.314477 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.332183 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.332242 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.332265 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.332297 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.332323 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.362634 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.435220 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.435270 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.435282 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.435299 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.435310 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.465700 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:12:54.852559078 +0000 UTC Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.525932 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.526003 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:35 crc kubenswrapper[4913]: E0121 06:35:35.526158 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:35 crc kubenswrapper[4913]: E0121 06:35:35.526279 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.537347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.537404 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.537428 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.537463 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.537489 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.640233 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.640281 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.640299 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.640321 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.640341 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.721477 4913 generic.go:334] "Generic (PLEG): container finished" podID="0e8f223b-fd76-4720-a29f-cb89654e33f5" containerID="8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851" exitCode=0 Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.721540 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerDied","Data":"8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.743347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.743750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.743775 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.743804 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.743825 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.754724 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.770921 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.785892 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.798057 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc 
kubenswrapper[4913]: I0121 06:35:35.825081 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.840079 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.846162 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.846198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.846208 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.846223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.846234 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.853790 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.876061 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.894049 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.917536 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.929963 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.939895 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.948530 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.948566 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.948574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.948616 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.948625 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:35Z","lastTransitionTime":"2026-01-21T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.951528 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.961756 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.971381 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:35 crc kubenswrapper[4913]: I0121 06:35:35.989542 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.050764 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.050813 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.050834 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.050857 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.050874 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.105258 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.105360 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.105405 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:44.105393583 +0000 UTC m=+33.901753256 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.153166 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.153210 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.153232 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.153259 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.153280 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.206459 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.206692 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:35:44.206661626 +0000 UTC m=+34.003021339 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.206753 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.206823 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.206882 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207039 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207075 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207111 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207109 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207133 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:36 crc 
kubenswrapper[4913]: E0121 06:35:36.207162 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207188 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207113 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:44.207096768 +0000 UTC m=+34.003456471 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207248 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:44.207221801 +0000 UTC m=+34.003581504 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.207271 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:44.207260282 +0000 UTC m=+34.003619985 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.256806 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.256878 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.256924 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.256958 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.256982 4913 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.360224 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.360366 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.360387 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.360413 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.360432 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.464187 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.464254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.464277 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.464307 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.464328 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.466471 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:03:06.83210215 +0000 UTC Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.509168 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.509388 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.509622 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:35:44.509497607 +0000 UTC m=+34.305857310 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.525702 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.525781 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.525909 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:36 crc kubenswrapper[4913]: E0121 06:35:36.526053 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.567335 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.567400 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.567439 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.567469 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.567491 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.671242 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.671296 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.671328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.671349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.671365 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.732627 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" event={"ID":"0e8f223b-fd76-4720-a29f-cb89654e33f5","Type":"ContainerStarted","Data":"f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.739462 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.740368 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.740618 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.756949 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.774539 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.774628 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.774656 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.774687 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.774703 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.779733 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.781788 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.781905 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.796371 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.835136 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.855486 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.871760 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.877276 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.877353 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.877375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc 
kubenswrapper[4913]: I0121 06:35:36.877398 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.877417 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.886804 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc 
kubenswrapper[4913]: I0121 06:35:36.908581 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.926417 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.947173 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.961114 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.976165 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.981032 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.981084 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.981101 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.981123 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.981140 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:36Z","lastTransitionTime":"2026-01-21T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:36 crc kubenswrapper[4913]: I0121 06:35:36.997352 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.019418 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.039141 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.062870 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.080045 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.085029 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.085140 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.085179 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.085220 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.085252 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.099455 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.115383 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.141778 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7
dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.158137 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.172127 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.183957 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc 
kubenswrapper[4913]: I0121 06:35:37.188570 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.188644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.188662 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.188683 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.188701 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.214114 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.234691 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
1T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.251126 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.272309 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.288933 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.290701 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.290736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.290747 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.290763 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.290775 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.309852 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.322966 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.333872 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.351200 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.393315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.393355 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.393367 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.393380 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.393389 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.467003 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:15:20.118592674 +0000 UTC Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.497035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.497087 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.497104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.497126 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.497142 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.525920 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.526059 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:37 crc kubenswrapper[4913]: E0121 06:35:37.526247 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:37 crc kubenswrapper[4913]: E0121 06:35:37.526444 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.599980 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.600051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.600074 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.600105 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.600132 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.702309 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.702548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.702634 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.702730 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.702799 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.743661 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.805910 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.805966 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.805983 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.806008 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.806025 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.909673 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.909743 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.909761 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.909784 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:37 crc kubenswrapper[4913]: I0121 06:35:37.909803 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:37Z","lastTransitionTime":"2026-01-21T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.012699 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.012750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.012762 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.012781 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.012794 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.116735 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.116800 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.116819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.116842 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.116860 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.219513 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.219552 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.219564 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.219579 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.219610 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.321896 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.321937 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.321947 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.321962 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.321975 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.424155 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.424204 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.424215 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.424231 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.424246 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.467991 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:15:16.428549236 +0000 UTC Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.525443 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.525445 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:38 crc kubenswrapper[4913]: E0121 06:35:38.525579 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:38 crc kubenswrapper[4913]: E0121 06:35:38.525681 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.527273 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.527300 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.527309 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.527325 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.527333 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.630347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.630415 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.630434 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.630462 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.630481 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.733122 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.733191 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.733213 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.733239 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.733257 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.745746 4913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.836574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.836629 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.836638 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.836652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.836664 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.938671 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.938710 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.938726 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.938746 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:38 crc kubenswrapper[4913]: I0121 06:35:38.938760 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:38Z","lastTransitionTime":"2026-01-21T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.041660 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.041695 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.041703 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.041714 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.041724 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.147574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.147664 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.147680 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.147704 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.147722 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.250473 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.250528 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.250544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.250565 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.250581 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.354445 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.354508 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.354525 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.354551 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.354569 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.457716 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.457828 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.457856 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.457892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.457926 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.469167 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:52:28.136243248 +0000 UTC Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.525631 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.525634 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:39 crc kubenswrapper[4913]: E0121 06:35:39.525864 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:39 crc kubenswrapper[4913]: E0121 06:35:39.525948 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.560717 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.560779 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.560802 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.560830 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.560856 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.663617 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.663669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.663681 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.663698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.663711 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.765514 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.765636 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.765665 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.765693 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.765723 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.868767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.868847 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.868871 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.868899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.868920 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.971824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.971877 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.971899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.971924 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:39 crc kubenswrapper[4913]: I0121 06:35:39.971948 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:39Z","lastTransitionTime":"2026-01-21T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.075010 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.075078 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.075090 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.075113 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.075125 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.177692 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.177755 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.177773 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.177793 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.177810 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.280726 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.280814 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.280838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.280866 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.280890 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.384100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.384140 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.384159 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.384179 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.384195 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.395789 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.411995 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.427938 4913 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc 
kubenswrapper[4913]: I0121 06:35:40.459541 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.470187 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:15:58.851719386 +0000 UTC Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.488074 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.488137 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.488163 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.488189 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.488208 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.496672 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.515039 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.525483 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.525511 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:40 crc kubenswrapper[4913]: E0121 06:35:40.525692 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:40 crc kubenswrapper[4913]: E0121 06:35:40.525804 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.534085 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.550872 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.568366 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.586728 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.591145 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.591193 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.591212 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.591238 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.591265 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.603828 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.628986 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.653359 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.680762 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.693975 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.694012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.694021 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.694035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.694044 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.694481 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.710418 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.728874 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.741796 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.752518 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.770949 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.782118 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.792783 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc 
kubenswrapper[4913]: I0121 06:35:40.797187 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.797258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.797277 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.797301 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.797320 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.821291 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.844719 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.860161 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.874841 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.894503 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.899576 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.899645 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.899657 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.899676 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.899689 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:40Z","lastTransitionTime":"2026-01-21T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.914039 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.928474 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.941747 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.958130 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.973619 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:40 crc kubenswrapper[4913]: I0121 06:35:40.986934 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.001864 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.001928 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.001946 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.001972 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.001991 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.104567 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.104661 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.104680 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.104710 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.104729 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.207101 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.207178 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.207203 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.207237 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.207261 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.310905 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.310982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.311004 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.311035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.311057 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.414442 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.414496 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.414513 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.414535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.414551 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.470660 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:48:18.700567019 +0000 UTC Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.517064 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.517537 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.517757 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.517939 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.518132 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.526276 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.526410 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.526658 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.526968 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.584142 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.584222 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.584250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.584285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.584306 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.603877 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.608125 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.608191 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.608205 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.608228 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.608246 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.624467 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.629449 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.629515 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.629535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.629562 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.629581 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.649090 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.653333 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.653388 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.653405 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.653431 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.653450 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.672446 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.676940 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.677006 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.677020 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.677044 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.677061 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.692839 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: E0121 06:35:41.692985 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.695151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.695206 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.695220 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.695238 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.695251 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.797748 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.797796 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.797805 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.797821 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.797832 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.899659 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r"] Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.900411 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.901387 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.901462 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.901484 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.901516 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.901537 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:41Z","lastTransitionTime":"2026-01-21T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.903275 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.904958 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.921549 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":
\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.938544 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.953637 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.966097 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd5c2\" (UniqueName: \"kubernetes.io/projected/4aaba44f-534c-4eac-9250-e6e737a701bb-kube-api-access-dd5c2\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.966221 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4aaba44f-534c-4eac-9250-e6e737a701bb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:41 crc 
kubenswrapper[4913]: I0121 06:35:41.966267 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4aaba44f-534c-4eac-9250-e6e737a701bb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.966365 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4aaba44f-534c-4eac-9250-e6e737a701bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.974038 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:41 crc kubenswrapper[4913]: I0121 06:35:41.989257 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.001978 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.003951 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.004015 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.004033 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.004055 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.004071 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.016429 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.031478 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.046862 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.063275 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.067139 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd5c2\" (UniqueName: \"kubernetes.io/projected/4aaba44f-534c-4eac-9250-e6e737a701bb-kube-api-access-dd5c2\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.067199 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4aaba44f-534c-4eac-9250-e6e737a701bb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.067228 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4aaba44f-534c-4eac-9250-e6e737a701bb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.067271 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4aaba44f-534c-4eac-9250-e6e737a701bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.068271 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4aaba44f-534c-4eac-9250-e6e737a701bb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.069039 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4aaba44f-534c-4eac-9250-e6e737a701bb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.072558 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4aaba44f-534c-4eac-9250-e6e737a701bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.075678 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.082875 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd5c2\" (UniqueName: \"kubernetes.io/projected/4aaba44f-534c-4eac-9250-e6e737a701bb-kube-api-access-dd5c2\") pod \"ovnkube-control-plane-749d76644c-kkr2r\" (UID: \"4aaba44f-534c-4eac-9250-e6e737a701bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.100006 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.107279 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.107438 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.107521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.107646 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.107762 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.116402 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e0
94cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.130481 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.143361 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc 
kubenswrapper[4913]: I0121 06:35:42.164702 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.180182 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.210020 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.210056 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.210065 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.210077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.210086 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.220409 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" Jan 21 06:35:42 crc kubenswrapper[4913]: W0121 06:35:42.233901 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4aaba44f_534c_4eac_9250_e6e737a701bb.slice/crio-0763cdbfa7b2e96c3b31e9dd9714ddcdec94ce8f093eaa54b2b81f271f34a610 WatchSource:0}: Error finding container 0763cdbfa7b2e96c3b31e9dd9714ddcdec94ce8f093eaa54b2b81f271f34a610: Status 404 returned error can't find the container with id 0763cdbfa7b2e96c3b31e9dd9714ddcdec94ce8f093eaa54b2b81f271f34a610 Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.312476 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.312517 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.312527 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.312543 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.312554 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.414741 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.414777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.414794 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.414819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.414838 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.471292 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:12:41.300696186 +0000 UTC Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.518174 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.518237 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.518259 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.518288 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.518310 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.526091 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.526188 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:42 crc kubenswrapper[4913]: E0121 06:35:42.526315 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:42 crc kubenswrapper[4913]: E0121 06:35:42.526445 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.622375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.622846 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.622865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.622888 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.622906 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.725052 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.725106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.725126 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.725177 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.725196 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.760877 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" event={"ID":"4aaba44f-534c-4eac-9250-e6e737a701bb","Type":"ContainerStarted","Data":"56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.760945 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" event={"ID":"4aaba44f-534c-4eac-9250-e6e737a701bb","Type":"ContainerStarted","Data":"0763cdbfa7b2e96c3b31e9dd9714ddcdec94ce8f093eaa54b2b81f271f34a610"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.763382 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/0.log" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.767300 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872" exitCode=1 Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.767352 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.768204 4913 scope.go:117] "RemoveContainer" containerID="9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.786927 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.802270 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.807374 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.823326 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.827363 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.827544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.827715 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.827843 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.827963 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.844963 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.856965 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.884033 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.919799 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.930114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.930157 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.930168 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.930184 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.930197 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:42Z","lastTransitionTime":"2026-01-21T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.937776 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.956390 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f
99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.970481 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.982681 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:42 crc kubenswrapper[4913]: I0121 06:35:42.997565 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:42Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc 
kubenswrapper[4913]: I0121 06:35:43.014090 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:42Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI0121 06:35:39.725089 6225 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from 
github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:35:39.725459 6225 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:35:39.725527 6225 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:35:39.725626 6225 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 06:35:39.725686 6225 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:35:39.725721 6225 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:35:39.725727 6225 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 06:35:39.725637 6225 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:35:39.725784 6225 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:35:39.725779 6225 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:35:39.725827 6225 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:35:39.725853 6225 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:35:39.725962 6225 factory.go:656] Stopping watch factory\\\\nI0121 06:35:39.725987 6225 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.026272 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.032835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.032863 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.032871 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc 
kubenswrapper[4913]: I0121 06:35:43.032883 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.032891 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.037906 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.051220 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.060478 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.134992 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.135030 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.135042 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.135057 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.135067 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.238433 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.238487 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.238503 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.238526 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.238543 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.341094 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.341139 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.341150 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.341167 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.341179 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.443317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.443361 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.443370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.443383 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.443394 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.471909 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:14:05.548697659 +0000 UTC Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.525537 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.525668 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:43 crc kubenswrapper[4913]: E0121 06:35:43.525716 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:43 crc kubenswrapper[4913]: E0121 06:35:43.525798 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.545576 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.545675 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.545712 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.545729 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.545740 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.648444 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.648531 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.648554 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.648582 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.648638 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.751967 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.752371 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.752397 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.752427 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.752451 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.775722 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/0.log" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.781001 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.781536 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.783495 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" event={"ID":"4aaba44f-534c-4eac-9250-e6e737a701bb","Type":"ContainerStarted","Data":"3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.802404 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.819159 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.834162 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.852538 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.855436 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.855506 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.855533 4913 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.855564 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.855623 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.873440 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 
06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.888139 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 
06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.901635 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.921442 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.936911 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.957639 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.958977 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.959031 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.959047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.959081 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.959097 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:43Z","lastTransitionTime":"2026-01-21T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.969807 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:43 crc kubenswrapper[4913]: I0121 06:35:43.984146 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.006345 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.023654 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.037695 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.051368 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc 
kubenswrapper[4913]: I0121 06:35:44.061707 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.061777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.061798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.061822 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.061839 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.079287 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:42Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI0121 06:35:39.725089 6225 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:35:39.725459 6225 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 
06:35:39.725527 6225 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:35:39.725626 6225 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 06:35:39.725686 6225 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:35:39.725721 6225 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:35:39.725727 6225 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 06:35:39.725637 6225 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:35:39.725784 6225 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:35:39.725779 6225 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:35:39.725827 6225 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:35:39.725853 6225 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:35:39.725962 6225 factory.go:656] Stopping watch factory\\\\nI0121 06:35:39.725987 6225 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.099713 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.119253 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.130020 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.150646 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.166101 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.166167 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.166372 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.166393 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.166415 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.166427 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.186198 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.191871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.191960 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.192001 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:00.191989585 +0000 UTC m=+49.988349258 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.196707 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.218283 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.233990 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.249177 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.261963 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc 
kubenswrapper[4913]: I0121 06:35:44.268279 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.268408 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.268420 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.268434 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.268444 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.279451 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:42Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI0121 06:35:39.725089 6225 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:35:39.725459 6225 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 
06:35:39.725527 6225 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:35:39.725626 6225 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 06:35:39.725686 6225 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:35:39.725721 6225 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:35:39.725727 6225 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 06:35:39.725637 6225 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:35:39.725784 6225 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:35:39.725779 6225 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:35:39.725827 6225 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:35:39.725853 6225 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:35:39.725962 6225 factory.go:656] Stopping watch factory\\\\nI0121 06:35:39.725987 6225 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.290886 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.293193 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 
06:35:44.293314 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:36:00.293283898 +0000 UTC m=+50.089643571 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.293382 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.293428 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.293457 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293541 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293549 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293549 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293565 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293572 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293579 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293581 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293598 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:00.293575327 +0000 UTC m=+50.089934990 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293639 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:00.293629028 +0000 UTC m=+50.089988781 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.293658 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-21 06:36:00.293651149 +0000 UTC m=+50.090010912 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.302940 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.316562 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.328176 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.338752 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.372522 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.372568 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.372580 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.372627 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.372640 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.472423 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:32:54.218342734 +0000 UTC Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.476138 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.476203 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.476224 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.476250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.476270 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.525531 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.525559 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.525765 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.525920 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.579329 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.579398 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.579418 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.579438 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.579457 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.596725 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.596971 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.597134 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:00.597093785 +0000 UTC m=+50.393453498 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.683268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.683320 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.683331 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.683345 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.683355 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.786965 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.787025 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.787045 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.787069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.787087 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.789179 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/1.log" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.790118 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/0.log" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.793459 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1" exitCode=1 Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.793556 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.793673 4913 scope.go:117] "RemoveContainer" containerID="9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.794790 4913 scope.go:117] "RemoveContainer" containerID="a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1" Jan 21 06:35:44 crc kubenswrapper[4913]: E0121 06:35:44.795078 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.815010 4913 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.832832 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.845975 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\
",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.865553 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.888621 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.890043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.890069 
4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.890077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.890090 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.890099 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.907538 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.923907 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc 
kubenswrapper[4913]: I0121 06:35:44.944444 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ef030b13614552e61f432143a158dd8cb1bc049617d8531387101ac500d3872\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:42Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI0121 06:35:39.725089 6225 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0121 06:35:39.725459 6225 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 
06:35:39.725527 6225 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:35:39.725626 6225 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 06:35:39.725686 6225 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:35:39.725721 6225 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:35:39.725727 6225 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 06:35:39.725637 6225 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:35:39.725784 6225 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:35:39.725779 6225 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:35:39.725827 6225 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 06:35:39.725853 6225 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:35:39.725962 6225 factory.go:656] Stopping watch factory\\\\nI0121 06:35:39.725987 6225 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:44Z\\\",\\\"message\\\":\\\"443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:35:43.729965 6400 lb_config.go:1031] Cluster endpoints for openshift-service-ca-operator/metrics for network=default are: map[]\\\\nI0121 06:35:43.729992 6400 services_controller.go:443] Built service openshift-service-ca-operator/metrics LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.40\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 06:35:43.730019 6400 services_controller.go:444] Built service openshift-service-ca-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0121 06:35:43.730021 6400 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"
/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.964358 4913 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.983959 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.993009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.993057 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.993071 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.993090 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:44 crc kubenswrapper[4913]: I0121 06:35:44.993104 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:44Z","lastTransitionTime":"2026-01-21T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.000284 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.020570 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.038640 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.063116 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.075513 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.087682 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.095198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.095261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.095285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.095315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.095337 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.100750 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.198921 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.198982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.199005 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.199034 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.199054 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.301990 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.302051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.302069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.302092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.302110 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.405865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.405949 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.405971 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.406002 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.406020 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.472912 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:50:44.066592592 +0000 UTC Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.508970 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.509060 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.509078 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.509101 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.509118 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.526293 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.526361 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:45 crc kubenswrapper[4913]: E0121 06:35:45.526472 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:45 crc kubenswrapper[4913]: E0121 06:35:45.526570 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.611315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.611378 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.611400 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.611433 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.611457 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.713964 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.714037 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.714059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.714085 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.714107 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.800729 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/1.log" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.805046 4913 scope.go:117] "RemoveContainer" containerID="a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1" Jan 21 06:35:45 crc kubenswrapper[4913]: E0121 06:35:45.805181 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.816695 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.816751 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.816767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.816791 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.816813 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.826262 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.844163 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.863503 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.881582 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.896075 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.910649 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.919099 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.919147 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.919159 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.919176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.919190 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:45Z","lastTransitionTime":"2026-01-21T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.923230 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.940619 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.952405 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.969511 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:45 crc kubenswrapper[4913]: I0121 06:35:45.981096 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:45Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.004689 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.022046 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.022167 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.022193 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.022225 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.022247 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.024394 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e0
94cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.036858 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.051851 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc 
kubenswrapper[4913]: I0121 06:35:46.073157 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:44Z\\\",\\\"message\\\":\\\"443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:35:43.729965 6400 lb_config.go:1031] Cluster endpoints for 
openshift-service-ca-operator/metrics for network=default are: map[]\\\\nI0121 06:35:43.729992 6400 services_controller.go:443] Built service openshift-service-ca-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.40\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 06:35:43.730019 6400 services_controller.go:444] Built service openshift-service-ca-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0121 06:35:43.730021 6400 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.085309 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.124348 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.124379 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.124390 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.124404 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.124414 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.227738 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.227810 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.227834 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.227866 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.227888 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.331058 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.331152 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.331170 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.331193 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.331211 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.434007 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.434051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.434061 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.434075 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.434085 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.473950 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:33:20.499203814 +0000 UTC Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.525691 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.525730 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:46 crc kubenswrapper[4913]: E0121 06:35:46.525896 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:46 crc kubenswrapper[4913]: E0121 06:35:46.526035 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.536339 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.536433 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.536461 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.536484 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.536502 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.639618 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.639709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.639723 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.639739 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.639752 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.742152 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.742211 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.742223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.742237 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.742246 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.845047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.845116 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.845138 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.845169 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.845193 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.948106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.948171 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.948188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.948213 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:46 crc kubenswrapper[4913]: I0121 06:35:46.948239 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:46Z","lastTransitionTime":"2026-01-21T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.051273 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.051346 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.051366 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.051392 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.051408 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.154573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.154637 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.154651 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.154664 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.154673 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.257258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.257298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.257313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.257328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.257338 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.359626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.359669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.359678 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.359709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.359721 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.461884 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.461925 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.461934 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.461949 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.461959 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.474372 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:45:36.890455608 +0000 UTC Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.525651 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.525728 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:47 crc kubenswrapper[4913]: E0121 06:35:47.525826 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:47 crc kubenswrapper[4913]: E0121 06:35:47.526039 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.565244 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.565303 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.565315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.565333 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.565347 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.667694 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.667735 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.667745 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.667760 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.667770 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.771165 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.771231 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.771249 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.771277 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.771295 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.874032 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.874106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.874127 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.874156 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.874180 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.976193 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.976231 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.976241 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.976255 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:47 crc kubenswrapper[4913]: I0121 06:35:47.976267 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:47Z","lastTransitionTime":"2026-01-21T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.078581 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.078653 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.078670 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.078689 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.078705 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.182019 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.182081 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.182092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.182106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.182116 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.285416 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.285482 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.285499 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.285523 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.285539 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.389287 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.389336 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.389349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.389366 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.389380 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.474774 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 08:10:24.101070046 +0000 UTC Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.491655 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.491714 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.491736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.491766 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.491791 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.525756 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:48 crc kubenswrapper[4913]: E0121 06:35:48.525930 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.526082 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:48 crc kubenswrapper[4913]: E0121 06:35:48.526252 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.594581 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.594658 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.594669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.594686 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.594699 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.697425 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.697479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.697495 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.697517 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.697535 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.800685 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.800749 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.800813 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.800839 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.800856 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.903863 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.903957 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.903973 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.904004 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:48 crc kubenswrapper[4913]: I0121 06:35:48.904020 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:48Z","lastTransitionTime":"2026-01-21T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.006958 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.007031 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.007049 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.007078 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.007098 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.110349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.110402 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.110412 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.110431 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.110444 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.214043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.214107 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.214135 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.214162 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.214180 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.317233 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.317329 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.317353 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.317379 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.317403 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.420848 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.420909 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.420927 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.420951 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.420971 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.475113 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:46:58.316472197 +0000 UTC Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.523429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.523485 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.523498 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.523521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.523536 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.525845 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.525845 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:49 crc kubenswrapper[4913]: E0121 06:35:49.525992 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:49 crc kubenswrapper[4913]: E0121 06:35:49.526106 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.626986 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.627048 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.627066 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.627104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.627126 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.730230 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.730316 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.730337 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.730372 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.730392 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.833529 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.833652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.833676 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.833708 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.833730 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.937166 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.937268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.937324 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.937350 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:49 crc kubenswrapper[4913]: I0121 06:35:49.937369 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:49Z","lastTransitionTime":"2026-01-21T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.040405 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.040479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.040496 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.040524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.040542 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.143798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.143863 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.143883 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.143909 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.143941 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.247075 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.247130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.247148 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.247172 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.247189 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.350077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.350132 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.350145 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.350170 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.350186 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.453338 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.453388 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.453397 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.453411 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.453422 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.476105 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:43:19.387536289 +0000 UTC Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.525638 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.525676 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:50 crc kubenswrapper[4913]: E0121 06:35:50.525809 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:50 crc kubenswrapper[4913]: E0121 06:35:50.525913 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.539854 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.556192 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.556254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.556273 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.556297 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.556316 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.557789 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.579232 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.598124 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.617372 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.640551 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.659026 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.659088 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.659109 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.659140 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.659161 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.659870 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.687373 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.709061 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.734844 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.752255 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.762240 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.762290 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.762305 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.762322 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 
06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.762337 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.784320 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.807205 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.825442 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.841177 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc 
kubenswrapper[4913]: I0121 06:35:50.865692 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.865772 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.865798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.865865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.865885 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.874102 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:44Z\\\",\\\"message\\\":\\\"443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:35:43.729965 6400 lb_config.go:1031] Cluster endpoints for 
openshift-service-ca-operator/metrics for network=default are: map[]\\\\nI0121 06:35:43.729992 6400 services_controller.go:443] Built service openshift-service-ca-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.40\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 06:35:43.730019 6400 services_controller.go:444] Built service openshift-service-ca-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0121 06:35:43.730021 6400 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.891711 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.968407 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.968484 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.968507 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.968534 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:50 crc kubenswrapper[4913]: I0121 06:35:50.968550 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:50Z","lastTransitionTime":"2026-01-21T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.070891 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.070960 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.070982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.071011 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.071036 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.174258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.174338 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.174381 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.174416 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.174438 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.278201 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.278295 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.278319 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.278351 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.278375 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.381623 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.381696 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.381721 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.381754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.381779 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.477039 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:38:43.007070755 +0000 UTC Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.485298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.485368 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.485387 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.485411 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.485430 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.525917 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.525993 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:51 crc kubenswrapper[4913]: E0121 06:35:51.526233 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:51 crc kubenswrapper[4913]: E0121 06:35:51.526387 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.588932 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.589010 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.589028 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.589056 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.589078 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.692535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.692675 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.692698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.692738 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.692769 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.795476 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.795533 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.795550 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.795573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.795629 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.880624 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.880689 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.880707 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.880734 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.880753 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: E0121 06:35:51.902250 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.909393 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.909493 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.909514 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.909543 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.909563 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: E0121 06:35:51.930254 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:51Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.936029 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.936093 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.936114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.936141 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.936162 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.963300 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.963368 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.963387 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.963413 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.963431 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.988698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.988791 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.988813 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.988838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:51 crc kubenswrapper[4913]: I0121 06:35:51.988857 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:51Z","lastTransitionTime":"2026-01-21T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: E0121 06:35:52.201759 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:35:52Z is after 2025-08-24T17:21:41Z" Jan 21 06:35:52 crc kubenswrapper[4913]: E0121 06:35:52.202005 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.204356 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.204418 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.204438 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.204466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.204487 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.307864 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.307935 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.307955 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.307982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.308001 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.411390 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.411518 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.411538 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.411569 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.411623 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.477707 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:11:16.511495618 +0000 UTC Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.514217 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.514281 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.514298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.514322 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.514351 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.525725 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.525740 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:52 crc kubenswrapper[4913]: E0121 06:35:52.525962 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:52 crc kubenswrapper[4913]: E0121 06:35:52.526078 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.617074 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.617166 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.617193 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.617218 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.617236 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.720148 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.720251 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.720269 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.720337 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.720356 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.823649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.823703 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.823721 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.823746 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.823766 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.926559 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.926661 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.926679 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.926706 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:52 crc kubenswrapper[4913]: I0121 06:35:52.926726 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:52Z","lastTransitionTime":"2026-01-21T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.030077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.030134 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.030151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.030176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.030193 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.133178 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.133249 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.133268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.133296 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.133317 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.236803 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.236868 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.236886 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.236912 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.236931 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.340496 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.340561 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.340572 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.340617 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.340632 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.443537 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.443658 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.443689 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.443718 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.443740 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.478323 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:55:01.071176251 +0000 UTC Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.526008 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.526133 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:53 crc kubenswrapper[4913]: E0121 06:35:53.526241 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:53 crc kubenswrapper[4913]: E0121 06:35:53.526421 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.546833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.546902 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.546926 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.546959 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.546981 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.650250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.650289 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.650299 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.650316 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.650329 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.753963 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.754029 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.754046 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.754074 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.754096 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.857370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.857418 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.857431 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.857447 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.857457 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.960147 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.960201 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.960216 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.960236 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:53 crc kubenswrapper[4913]: I0121 06:35:53.960251 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:53Z","lastTransitionTime":"2026-01-21T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.063644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.063706 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.063724 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.063747 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.063767 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.166781 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.166847 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.166864 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.166887 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.166908 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.269953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.269999 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.270015 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.270038 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.270053 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.373123 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.373201 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.373228 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.373257 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.373280 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.475978 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.476049 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.476092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.476117 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.476134 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.479202 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:57:50.436649418 +0000 UTC Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.525809 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.525994 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:35:54 crc kubenswrapper[4913]: E0121 06:35:54.526376 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:35:54 crc kubenswrapper[4913]: E0121 06:35:54.526661 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.578663 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.578707 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.578717 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.578733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.578745 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.682525 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.682585 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.682649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.682681 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.682704 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.785892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.785960 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.785970 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.785990 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.786009 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.888995 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.889063 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.889080 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.889105 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.889122 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.992479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.992543 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.992559 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.992583 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:54 crc kubenswrapper[4913]: I0121 06:35:54.992633 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:54Z","lastTransitionTime":"2026-01-21T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.095699 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.095787 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.095806 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.095841 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.095860 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.199399 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.199511 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.199537 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.199575 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.199649 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.302971 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.303036 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.303053 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.303079 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.303102 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.409769 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.409853 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.409889 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.409927 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.409951 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.480074 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:36:17.041729673 +0000 UTC Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.512745 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.512780 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.512789 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.512801 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.512809 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.526068 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.526089 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:55 crc kubenswrapper[4913]: E0121 06:35:55.526236 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:55 crc kubenswrapper[4913]: E0121 06:35:55.526392 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.616512 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.616556 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.616568 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.616585 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.616634 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.719349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.719393 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.719403 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.719417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.719431 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.822499 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.822546 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.822560 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.822578 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.822616 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.924859 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.924930 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.924953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.924980 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:55 crc kubenswrapper[4913]: I0121 06:35:55.925002 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:55Z","lastTransitionTime":"2026-01-21T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.027756 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.027798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.027812 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.027829 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.027842 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.130982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.131062 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.131086 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.131114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.131137 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.234197 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.234286 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.234310 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.234343 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.234366 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.337767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.337842 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.337866 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.337896 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.337921 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.440650 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.440710 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.440727 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.440757 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.440777 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.480484 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:10:53.911449329 +0000 UTC
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.526251 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.526321 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 06:35:56 crc kubenswrapper[4913]: E0121 06:35:56.526503 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7"
Jan 21 06:35:56 crc kubenswrapper[4913]: E0121 06:35:56.526646 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.543261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.543311 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.543328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.543349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.543367 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.646996 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.647054 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.647068 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.647087 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.647103 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.750321 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.750378 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.750394 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.750417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.750436 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.852910 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.853015 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.853037 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.853101 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.853119 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.955832 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.955890 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.955913 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.955937 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:56 crc kubenswrapper[4913]: I0121 06:35:56.955959 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:56Z","lastTransitionTime":"2026-01-21T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.058923 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.058986 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.059004 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.059027 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.059048 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.161508 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.161551 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.161563 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.161579 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.161616 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.264009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.264061 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.264078 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.264104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.264125 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.366516 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.366572 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.366583 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.366625 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.366638 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.469452 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.469546 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.469574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.469634 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.469670 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.480640 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:57:14.629388133 +0000 UTC
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.525320 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.525325 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 06:35:57 crc kubenswrapper[4913]: E0121 06:35:57.525539 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 06:35:57 crc kubenswrapper[4913]: E0121 06:35:57.525643 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.571992 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.572038 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.572054 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.572076 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.572094 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.675268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.675339 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.675359 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.675385 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.675404 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.778964 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.779052 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.779070 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.779100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.779122 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.882826 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.882907 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.882930 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.882960 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.882985 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.985991 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.986031 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.986044 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.986059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:57 crc kubenswrapper[4913]: I0121 06:35:57.986071 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:57Z","lastTransitionTime":"2026-01-21T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.089155 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.089213 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.089228 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.089249 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.089264 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.191872 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.191953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.191965 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.191977 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.191987 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.295030 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.295083 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.295102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.295123 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.295140 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.397869 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.397913 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.397924 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.397940 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.397952 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.480794 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:35:30.104732182 +0000 UTC
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.501107 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.501191 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.501231 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.501250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.501262 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.525842 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.525853 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 06:35:58 crc kubenswrapper[4913]: E0121 06:35:58.526026 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7"
Jan 21 06:35:58 crc kubenswrapper[4913]: E0121 06:35:58.526192 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.603251 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.603304 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.603316 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.603335 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.603347 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.706726 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.706806 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.706831 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.706861 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.706881 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.809160 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.809234 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.809246 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.809263 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.809279 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.911817 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.911856 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.911865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.911879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:58 crc kubenswrapper[4913]: I0121 06:35:58.911891 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:58Z","lastTransitionTime":"2026-01-21T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.014893 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.014927 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.014937 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.014952 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.014963 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.116806 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.116849 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.116858 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.116872 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.116881 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.220030 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.220092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.220113 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.220137 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.220155 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.323408 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.323469 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.323488 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.323522 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.323544 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.426393 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.426429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.426440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.426453 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.426463 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.481162 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:29:58.084062284 +0000 UTC Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.525888 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.525959 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:35:59 crc kubenswrapper[4913]: E0121 06:35:59.526339 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:35:59 crc kubenswrapper[4913]: E0121 06:35:59.526557 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.526721 4913 scope.go:117] "RemoveContainer" containerID="a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.529701 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.529754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.529772 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.529833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.529852 4913 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.634810 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.634897 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.634919 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.634946 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.634966 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.738443 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.739317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.739347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.739381 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.739408 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.844233 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.844291 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.844307 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.844332 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.844350 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.947209 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.947258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.947268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.947284 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:35:59 crc kubenswrapper[4913]: I0121 06:35:59.947295 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:35:59Z","lastTransitionTime":"2026-01-21T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.051462 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.051537 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.051552 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.051570 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.051611 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.155255 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.155304 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.155357 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.155381 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.155397 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.259374 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.259423 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.259433 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.259448 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.259458 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.287159 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.287340 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.287427 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:32.287406773 +0000 UTC m=+82.083766446 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.362365 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.362532 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.362553 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.362577 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.362634 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.388377 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.388669 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:36:32.388627573 +0000 UTC m=+82.184987286 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.388790 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.388894 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.388962 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389053 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389094 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389118 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389212 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389212 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389217 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:32.389186378 +0000 UTC m=+82.185546241 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389362 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:32.389331432 +0000 UTC m=+82.185691145 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389237 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389399 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.389495 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:32.389475065 +0000 UTC m=+82.185834968 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.465672 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.465717 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.465728 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.465745 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.465760 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.481880 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:01:23.151106646 +0000 UTC Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.525631 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.525804 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.525854 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.526064 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.544727 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.562420 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.567989 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.568047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.568066 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.568092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.568113 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.577205 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.593366 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.613397 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.636244 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.648805 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.670777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.670854 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.670876 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.670903 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 
06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.670927 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.676313 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.692465 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.692606 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: E0121 06:36:00.692659 4913 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:36:32.692642776 +0000 UTC m=+82.489002469 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.699246 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.715503 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.732031 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc 
kubenswrapper[4913]: I0121 06:36:00.759078 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:44Z\\\",\\\"message\\\":\\\"443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:35:43.729965 6400 lb_config.go:1031] Cluster endpoints for 
openshift-service-ca-operator/metrics for network=default are: map[]\\\\nI0121 06:35:43.729992 6400 services_controller.go:443] Built service openshift-service-ca-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.40\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 06:35:43.730019 6400 services_controller.go:444] Built service openshift-service-ca-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0121 06:35:43.730021 6400 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.773992 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.774051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.774069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.774099 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.774118 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.775791 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.796527 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.820379 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.842183 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.867782 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:00Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.876949 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.877216 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.877352 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.877496 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.877714 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.980445 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.980749 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.980939 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.981082 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:00 crc kubenswrapper[4913]: I0121 06:36:00.981214 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:00Z","lastTransitionTime":"2026-01-21T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.084536 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.084637 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.084660 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.084688 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.084706 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.188184 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.188258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.188275 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.188301 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.188320 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.291169 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.291215 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.291232 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.291252 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.291264 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.393467 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.393511 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.393519 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.393534 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.393545 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.483059 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:43:09.190514064 +0000 UTC Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.496008 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.496053 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.496061 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.496078 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.496087 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.525719 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.525724 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:01 crc kubenswrapper[4913]: E0121 06:36:01.525932 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:01 crc kubenswrapper[4913]: E0121 06:36:01.526038 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.599631 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.599703 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.599725 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.599750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.599767 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.703422 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.703466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.703478 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.703494 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.703509 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.806373 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.806421 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.806434 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.806453 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.806465 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.863582 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/1.log" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.867048 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.867631 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.893521 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1
594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.908388 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.908420 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.908432 4913 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.908446 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.908456 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:01Z","lastTransitionTime":"2026-01-21T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.911531 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.927164 4913 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.942083 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:01 crc 
kubenswrapper[4913]: I0121 06:36:01.962693 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:44Z\\\",\\\"message\\\":\\\"443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:35:43.729965 6400 lb_config.go:1031] Cluster endpoints for 
openshift-service-ca-operator/metrics for network=default are: map[]\\\\nI0121 06:35:43.729992 6400 services_controller.go:443] Built service openshift-service-ca-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.40\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 06:35:43.730019 6400 services_controller.go:444] Built service openshift-service-ca-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0121 06:35:43.730021 6400 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node 
crc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.980724 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:01 crc kubenswrapper[4913]: I0121 06:36:01.996280 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:01Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.010674 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.010712 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.010720 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.010733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.010742 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.014887 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.032822 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.047715 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.060170 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.072703 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.084729 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.101050 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.112994 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.113041 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.113003 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.113051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.113221 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.113240 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.125302 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.134860 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.215485 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.215569 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.215648 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.215682 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.215707 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.229955 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.230020 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.230043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.230069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.230091 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.250319 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.255077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.255148 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.255168 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.255199 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.255220 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.275063 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.279284 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.279319 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.279331 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.279349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.279366 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.293780 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.299098 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.299140 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.299151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.299166 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.299176 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.320194 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.324861 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.324890 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.324898 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.324913 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.324923 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.338074 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.338183 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.339796 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.339824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.339833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.339845 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.339855 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.442172 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.442244 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.442281 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.442313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.442338 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.484020 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:07:50.210943591 +0000 UTC Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.525799 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.525812 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.526042 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.526162 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.545285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.545370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.545394 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.545424 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.545447 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.649412 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.649510 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.649616 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.649651 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.649672 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.753313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.753375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.753397 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.753429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.753454 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.856976 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.857047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.857071 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.857100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.857123 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.875422 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/2.log" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.876097 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/1.log" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.879938 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0" exitCode=1 Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.879996 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.880045 4913 scope.go:117] "RemoveContainer" containerID="a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.882813 4913 scope.go:117] "RemoveContainer" containerID="cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0" Jan 21 06:36:02 crc kubenswrapper[4913]: E0121 06:36:02.883331 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.903045 4913 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc 
kubenswrapper[4913]: I0121 06:36:02.934847 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c5792149e5c75cdbdbbdb6cedb25777d20cefecd2df828e72fa921fbbcc8e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:35:44Z\\\",\\\"message\\\":\\\"443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8b82f026-5975-4a1b-bb18-08d5d51147ec}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 06:35:43.729965 6400 lb_config.go:1031] Cluster endpoints for 
openshift-service-ca-operator/metrics for network=default are: map[]\\\\nI0121 06:35:43.729992 6400 services_controller.go:443] Built service openshift-service-ca-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.40\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 06:35:43.730019 6400 services_controller.go:444] Built service openshift-service-ca-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF0121 06:35:43.730021 6400 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] 
Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-
cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.952231 4913 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.961192 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.961261 4913 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.961276 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.961301 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.961315 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:02Z","lastTransitionTime":"2026-01-21T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:02 crc kubenswrapper[4913]: I0121 06:36:02.988210 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.012856 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.031810 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.054198 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.065354 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.065432 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.065453 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.065487 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.065513 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.075562 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.098642 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.121726 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.151052 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.169133 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.169408 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.170251 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.170306 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.170327 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.178537 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.203431 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.220131 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.242315 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.265002 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173
610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.273396 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.273453 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.273465 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.273486 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.273499 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.283448 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.377431 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.377555 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.377582 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.377656 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.377707 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.481369 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.481452 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.481477 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.481509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.481536 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.484674 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:36:41.774589044 +0000 UTC Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.526291 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.526404 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:03 crc kubenswrapper[4913]: E0121 06:36:03.526508 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:03 crc kubenswrapper[4913]: E0121 06:36:03.526639 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.584582 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.584662 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.584678 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.584699 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.584715 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.687777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.687843 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.687865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.687899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.687926 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.797562 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.797666 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.797685 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.797711 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.797729 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.886780 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/2.log" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.891842 4913 scope.go:117] "RemoveContainer" containerID="cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0" Jan 21 06:36:03 crc kubenswrapper[4913]: E0121 06:36:03.892099 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.899833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.899920 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.899950 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.899982 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.900002 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:03Z","lastTransitionTime":"2026-01-21T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.914399 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-ope
rator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.932508 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.951083 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.978204 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:03 crc kubenswrapper[4913]: I0121 06:36:03.998196 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.003726 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.003986 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.004133 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.004257 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.004384 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.022072 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.038465 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.070374 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.089528 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.107615 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.107672 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.107690 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.107710 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.107727 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.122936 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1c
efc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.140311 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.156058 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.171099 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc 
kubenswrapper[4913]: I0121 06:36:04.190269 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.208072 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.210138 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.210187 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.210201 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.210223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.210239 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.227663 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.248359 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.313694 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.313752 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.313770 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.313793 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.313811 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.416561 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.416666 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.416688 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.416714 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.416734 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.484842 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:18:42.927038033 +0000 UTC Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.519719 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.519773 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.519790 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.519811 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.519829 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.526198 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.526205 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:04 crc kubenswrapper[4913]: E0121 06:36:04.526387 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:04 crc kubenswrapper[4913]: E0121 06:36:04.526522 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.622957 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.623018 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.623037 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.623059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.623079 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.637879 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.651661 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.656904 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a
1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.678175 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.699402 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.724725 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.726623 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.726694 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.726713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.726741 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.726757 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.746033 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.765455 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.782626 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.797940 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.812392 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.830280 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.830342 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.830358 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.830376 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.830388 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.831510 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z 
is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.842757 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.861876 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7
dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.876042 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.891079 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.904941 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc 
kubenswrapper[4913]: I0121 06:36:04.933626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.933685 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.933702 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.933725 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.933740 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:04Z","lastTransitionTime":"2026-01-21T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.934522 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:04 crc kubenswrapper[4913]: I0121 06:36:04.950077 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.036898 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.036981 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.036997 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.037021 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.037039 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.140848 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.140911 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.140922 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.140947 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.140960 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.243384 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.243478 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.243502 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.243542 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.243569 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.346503 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.346575 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.346648 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.346688 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.346726 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.449641 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.449709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.449729 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.449754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.449774 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.485665 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 01:32:52.012375162 +0000 UTC Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.526143 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.526152 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:05 crc kubenswrapper[4913]: E0121 06:36:05.526364 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:05 crc kubenswrapper[4913]: E0121 06:36:05.526501 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.553014 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.553083 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.553103 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.553132 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.553154 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.656261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.656313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.656331 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.656348 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.656362 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.758979 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.759035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.759045 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.759057 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.759084 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.862632 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.862692 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.862707 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.862725 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.862739 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.966140 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.966260 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.966280 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.966310 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:05 crc kubenswrapper[4913]: I0121 06:36:05.966328 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:05Z","lastTransitionTime":"2026-01-21T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.069410 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.069476 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.069490 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.069517 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.069535 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.172973 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.173052 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.173079 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.173109 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.173128 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.276566 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.276685 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.276712 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.276740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.276757 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.380525 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.380632 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.380661 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.380695 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.380719 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.483974 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.484024 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.484036 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.484054 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.484068 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.486654 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:54:25.43018529 +0000 UTC Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.526280 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.526351 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:06 crc kubenswrapper[4913]: E0121 06:36:06.526505 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:06 crc kubenswrapper[4913]: E0121 06:36:06.526678 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.587351 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.587411 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.587425 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.587445 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.587455 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.691294 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.691371 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.691394 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.691422 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.691440 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.794918 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.794969 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.794988 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.795017 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.795036 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.897781 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.897858 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.897881 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.897914 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:06 crc kubenswrapper[4913]: I0121 06:36:06.897936 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:06Z","lastTransitionTime":"2026-01-21T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.001043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.001094 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.001104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.001118 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.001128 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.103381 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.103509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.103533 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.103557 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.103628 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.206520 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.206611 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.206622 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.206644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.206663 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.310102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.310170 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.310186 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.310209 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.310226 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.413504 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.413573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.413614 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.413634 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.413648 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.487716 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 13:55:46.184774315 +0000 UTC Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.516500 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.516631 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.516664 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.516695 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.516717 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.525856 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:07 crc kubenswrapper[4913]: E0121 06:36:07.526002 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.526057 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:07 crc kubenswrapper[4913]: E0121 06:36:07.526414 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.620145 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.620236 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.620259 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.620289 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.620311 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.723764 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.723808 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.723821 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.723840 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.723851 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.827032 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.827077 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.827093 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.827114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.827128 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.930188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.930252 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.930335 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.930376 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:07 crc kubenswrapper[4913]: I0121 06:36:07.930400 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:07Z","lastTransitionTime":"2026-01-21T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.033937 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.033988 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.034000 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.034018 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.034029 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.136938 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.136998 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.137015 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.137040 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.137058 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.240401 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.240833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.240851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.240877 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.240898 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.343916 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.343962 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.343975 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.343992 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.344005 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.447286 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.447353 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.447370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.447395 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.447413 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.487880 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 09:37:43.357887901 +0000 UTC Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.525284 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:08 crc kubenswrapper[4913]: E0121 06:36:08.525403 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.525507 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:08 crc kubenswrapper[4913]: E0121 06:36:08.525857 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.549756 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.549813 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.549828 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.549850 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.549872 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.652524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.652584 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.652636 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.652659 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.652679 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.755648 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.755712 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.755729 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.755786 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.755804 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.859347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.859419 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.859444 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.859473 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.859499 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.962371 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.962455 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.962479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.962512 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:08 crc kubenswrapper[4913]: I0121 06:36:08.962534 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:08Z","lastTransitionTime":"2026-01-21T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.065485 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.065536 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.065576 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.065620 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.065633 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.167912 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.167959 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.167971 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.168009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.168032 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.271727 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.272353 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.272373 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.272397 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.272429 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.377465 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.377551 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.377579 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.377658 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.377685 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.480932 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.480980 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.480996 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.481019 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.481036 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.488320 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:42:23.999565651 +0000 UTC Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.526238 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.526238 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:09 crc kubenswrapper[4913]: E0121 06:36:09.526389 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:09 crc kubenswrapper[4913]: E0121 06:36:09.526468 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.584024 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.584085 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.584100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.584123 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.584138 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.688038 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.688137 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.688162 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.688190 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.688215 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.791572 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.791640 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.791652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.791674 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.791686 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.894822 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.894896 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.894925 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.894955 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.894978 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.998538 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.998630 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.998649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.998671 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:09 crc kubenswrapper[4913]: I0121 06:36:09.998691 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:09Z","lastTransitionTime":"2026-01-21T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.102744 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.102819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.102838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.102868 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.102888 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.207219 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.207285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.207298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.207318 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.207330 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.309863 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.309944 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.309963 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.310009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.310036 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.412180 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.412229 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.412245 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.412265 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.412278 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.488501 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:58:39.192825922 +0000 UTC Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.515098 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.515360 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.515441 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.515524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.515641 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.525779 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:10 crc kubenswrapper[4913]: E0121 06:36:10.526005 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.526144 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:10 crc kubenswrapper[4913]: E0121 06:36:10.526348 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.546118 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.566378 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.591803 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.609443 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.618192 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.618225 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.618234 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.618248 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.618258 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.629661 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.649306 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.666575 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.678893 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.693666 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.706068 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.719992 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.721894 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.722096 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.722316 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.722474 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.722656 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.734091 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.747393 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a
155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.771144 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
1T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.785651 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.802102 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.816714 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc 
kubenswrapper[4913]: I0121 06:36:10.825407 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.825615 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.825734 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.825884 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.825998 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.835801 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:10Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.928387 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.928709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.928818 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.928913 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:10 crc kubenswrapper[4913]: I0121 06:36:10.929123 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:10Z","lastTransitionTime":"2026-01-21T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.031875 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.032148 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.032257 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.032357 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.032437 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.134672 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.134966 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.135050 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.135130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.135194 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.237531 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.237623 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.237641 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.237663 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.237682 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.339713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.339771 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.339786 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.339806 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.339819 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.442399 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.442472 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.442488 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.442511 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.442527 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.488835 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:49:20.228411414 +0000 UTC Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.525578 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.525637 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:11 crc kubenswrapper[4913]: E0121 06:36:11.525717 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:11 crc kubenswrapper[4913]: E0121 06:36:11.525814 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.544777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.544830 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.544847 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.544869 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.544891 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.647239 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.647283 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.647296 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.647313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.647326 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.751403 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.751445 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.751460 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.751481 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.751496 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.854010 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.854046 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.854057 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.854072 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.854081 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.956430 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.956518 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.956530 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.956548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:11 crc kubenswrapper[4913]: I0121 06:36:11.956561 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:11Z","lastTransitionTime":"2026-01-21T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.059360 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.059434 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.059454 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.059478 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.059498 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.161766 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.161835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.161858 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.161890 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.161926 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.264188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.264257 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.264276 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.264299 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.264318 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.367256 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.367309 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.367325 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.367348 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.367365 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.489765 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:27:09.543284341 +0000 UTC Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.491549 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.491578 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.491601 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.491614 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.491623 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.525405 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.525448 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.525615 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.525968 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.586618 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.586653 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.586667 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.586687 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.586700 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.606495 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.611307 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.611355 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.611375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.611401 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.611422 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.630952 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.635296 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.635339 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.635347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.635359 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.635369 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.651328 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.654610 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.654653 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.654663 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.654678 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.654686 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.672413 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.676833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.676881 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.676895 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.676911 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.676924 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.689245 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:12 crc kubenswrapper[4913]: E0121 06:36:12.689373 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.691037 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.691070 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.691079 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.691093 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.691102 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.793356 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.793417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.793431 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.793451 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.793465 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.897370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.897529 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.897550 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.897573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:12 crc kubenswrapper[4913]: I0121 06:36:12.897619 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:12Z","lastTransitionTime":"2026-01-21T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.000200 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.000262 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.000284 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.000311 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.000331 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.103777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.103828 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.103844 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.103867 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.103884 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.206001 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.206035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.206045 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.206059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.206069 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.308618 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.308667 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.308681 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.308698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.308709 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.411512 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.411551 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.411560 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.411574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.411609 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.490254 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 13:43:18.611832908 +0000 UTC Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.514396 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.514436 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.514449 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.514466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.514479 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.525879 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.526007 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:13 crc kubenswrapper[4913]: E0121 06:36:13.526118 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:13 crc kubenswrapper[4913]: E0121 06:36:13.526271 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.616891 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.616989 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.617011 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.617034 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.617050 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.719349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.719408 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.719426 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.719446 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.719463 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.822798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.822835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.822853 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.822867 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.822886 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.925502 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.925536 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.925544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.925558 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:13 crc kubenswrapper[4913]: I0121 06:36:13.925567 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:13Z","lastTransitionTime":"2026-01-21T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.028684 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.028733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.028745 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.028762 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.028775 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.132155 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.132204 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.132217 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.132235 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.132245 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.235143 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.235540 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.235728 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.235883 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.236014 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.338732 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.339100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.339276 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.339351 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.339498 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.442229 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.442559 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.442771 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.443112 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.443401 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.490782 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 16:36:55.421048047 +0000 UTC Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.526113 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.526212 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:14 crc kubenswrapper[4913]: E0121 06:36:14.526254 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:14 crc kubenswrapper[4913]: E0121 06:36:14.526427 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.546509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.546680 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.546749 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.546819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.546881 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.649693 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.650012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.650104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.650200 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.650284 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.753928 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.753985 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.754003 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.754026 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.754044 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.856291 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.856328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.856337 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.856350 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.856360 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.958203 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.958253 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.958271 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.958294 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:14 crc kubenswrapper[4913]: I0121 06:36:14.958313 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:14Z","lastTransitionTime":"2026-01-21T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.060343 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.060645 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.060739 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.060834 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.060919 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.163790 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.163843 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.163851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.163865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.163877 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.266906 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.267188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.267269 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.267385 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.267470 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.369939 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.369973 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.369983 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.369997 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.370006 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.472487 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.472530 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.472541 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.472558 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.472572 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.491672 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:08:53.693701998 +0000 UTC Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.526191 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.526201 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:15 crc kubenswrapper[4913]: E0121 06:36:15.526358 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:15 crc kubenswrapper[4913]: E0121 06:36:15.526461 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.575051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.575105 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.575121 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.575145 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.575164 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.677754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.677797 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.677807 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.677826 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.677837 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.781208 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.781258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.781270 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.781291 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.781307 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.885045 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.885130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.885151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.885180 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.885197 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.940092 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/0.log" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.940168 4913 generic.go:334] "Generic (PLEG): container finished" podID="b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf" containerID="9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd" exitCode=1 Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.940218 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerDied","Data":"9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd"} Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.940934 4913 scope.go:117] "RemoveContainer" containerID="9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.963088 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall 
event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.979053 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:15Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.988068 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.988200 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.988275 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.988347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:15 crc kubenswrapper[4913]: I0121 06:36:15.988418 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:15Z","lastTransitionTime":"2026-01-21T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.012364 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1c
efc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.026943 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.038182 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.049155 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc 
kubenswrapper[4913]: I0121 06:36:16.065502 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.079458 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.091181 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.091232 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.091245 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.091263 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.091276 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.094322 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.110489 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.123091 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.136850 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.150769 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.165425 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.182549 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.193918 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.193962 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.193971 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.193984 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.193992 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.195451 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.209688 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.219490 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.296649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.296702 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.296719 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.296740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.296754 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.399756 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.400097 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.400177 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.400258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.400323 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.492706 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:19:45.109034139 +0000 UTC Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.503011 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.503184 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.503247 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.503317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.503387 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.525690 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:16 crc kubenswrapper[4913]: E0121 06:36:16.525801 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.525697 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:16 crc kubenswrapper[4913]: E0121 06:36:16.526140 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.526441 4913 scope.go:117] "RemoveContainer" containerID="cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0" Jan 21 06:36:16 crc kubenswrapper[4913]: E0121 06:36:16.526631 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.605789 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.605815 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.605823 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.605835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.605843 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.708848 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.708880 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.708889 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.708906 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.708915 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.811203 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.811255 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.811267 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.811286 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.811299 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.913151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.913548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.913792 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.913952 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.914094 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:16Z","lastTransitionTime":"2026-01-21T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.945055 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/0.log" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.945101 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerStarted","Data":"f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6"} Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.959322 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.972606 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.984991 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:16 crc kubenswrapper[4913]: I0121 06:36:16.997037 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:16Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.009377 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.016205 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.016257 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.016294 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.016315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.016331 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.025007 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.046467 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.056886 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.068976 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.083003 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.095062 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5
eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.104076 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.112751 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc 
kubenswrapper[4913]: I0121 06:36:17.118186 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.118235 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.118245 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.118260 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.118270 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.128636 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.140157 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.156382 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.168126 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.178633 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:17Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.220490 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.220528 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.220539 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc 
kubenswrapper[4913]: I0121 06:36:17.220556 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.220568 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.322432 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.322741 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.322856 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.322953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.323035 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.425450 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.425800 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.425893 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.425975 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.426063 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.493005 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:22:35.551264469 +0000 UTC Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.525436 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.525495 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:17 crc kubenswrapper[4913]: E0121 06:36:17.525750 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:17 crc kubenswrapper[4913]: E0121 06:36:17.525825 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.528731 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.528760 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.528772 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.528789 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.528800 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.630819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.630884 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.630904 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.630933 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.630951 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.733059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.733095 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.733103 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.733117 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.733130 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.835337 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.835389 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.835401 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.835417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.835427 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.937570 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.937630 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.937642 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.937660 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:17 crc kubenswrapper[4913]: I0121 06:36:17.937673 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:17Z","lastTransitionTime":"2026-01-21T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.040386 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.040691 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.040767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.040870 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.040957 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.142944 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.142985 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.142995 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.143012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.143022 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.245488 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.245543 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.245554 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.245568 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.245577 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.348710 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.348775 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.348795 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.348819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.348838 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.450999 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.451055 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.451072 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.451096 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.451113 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.493302 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:29:33.778196578 +0000 UTC Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.526372 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.526399 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:18 crc kubenswrapper[4913]: E0121 06:36:18.526492 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:18 crc kubenswrapper[4913]: E0121 06:36:18.526618 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.553617 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.553648 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.553657 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.553670 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.553682 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.655613 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.655658 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.655670 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.655689 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.655702 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.759205 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.759261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.759278 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.759300 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.759320 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.861558 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.861610 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.861619 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.861632 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.861641 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.963248 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.963294 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.963306 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.963318 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:18 crc kubenswrapper[4913]: I0121 06:36:18.963329 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:18Z","lastTransitionTime":"2026-01-21T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.066432 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.066496 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.066510 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.066564 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.066612 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.169246 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.169290 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.169300 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.169314 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.169324 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.272041 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.272093 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.272104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.272120 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.272131 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.374510 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.374570 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.374613 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.374638 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.374655 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.477043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.477070 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.477080 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.477092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.477101 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.493698 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:54:01.350311845 +0000 UTC Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.526170 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:19 crc kubenswrapper[4913]: E0121 06:36:19.526283 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.526293 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:19 crc kubenswrapper[4913]: E0121 06:36:19.526576 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.579720 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.579784 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.579802 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.579829 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.579847 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.682615 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.682650 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.682660 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.682680 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.682692 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.785226 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.785409 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.785491 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.785567 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.785658 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.888382 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.888429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.888445 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.888467 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.888482 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.991015 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.991422 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.991542 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.991698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:19 crc kubenswrapper[4913]: I0121 06:36:19.991829 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:19Z","lastTransitionTime":"2026-01-21T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.094511 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.094865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.094956 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.095079 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.095168 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.198156 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.198196 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.198206 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.198223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.198232 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.300664 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.301012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.301359 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.301701 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.302005 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.405078 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.405115 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.405123 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.405136 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.405146 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.494767 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:04:28.021630153 +0000 UTC Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.507766 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.507803 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.507811 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.507826 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.507840 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.526247 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:20 crc kubenswrapper[4913]: E0121 06:36:20.526382 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.526529 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:20 crc kubenswrapper[4913]: E0121 06:36:20.526619 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.541832 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.566897 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.581100 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.593548 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.604875 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc 
kubenswrapper[4913]: I0121 06:36:20.609222 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.609256 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.609265 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.609279 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.609287 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.623103 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.635256 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.647642 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.660608 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.674103 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.685721 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.698721 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.709365 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.711475 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.711509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.711521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.711537 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.711548 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.717652 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.737510 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.748923 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.759552 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.768383 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.813367 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.813410 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.813421 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.813438 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 
06:36:20.813449 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.915626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.915665 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.915677 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.915693 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:20 crc kubenswrapper[4913]: I0121 06:36:20.915706 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:20Z","lastTransitionTime":"2026-01-21T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.017561 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.017621 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.017639 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.017659 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.017675 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.120541 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.120606 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.120619 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.120635 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.120645 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.222415 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.222470 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.222488 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.222509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.222524 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.324490 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.324566 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.324580 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.324633 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.324646 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.426892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.426917 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.426925 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.426937 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.426948 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.495526 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:15:26.266223202 +0000 UTC Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.525300 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.525339 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:21 crc kubenswrapper[4913]: E0121 06:36:21.525430 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:21 crc kubenswrapper[4913]: E0121 06:36:21.525526 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.529179 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.529305 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.529374 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.529465 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.529544 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.631480 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.631521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.631531 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.631548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.631559 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.733733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.733985 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.734054 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.734114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.734188 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.836844 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.837068 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.837146 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.837459 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.837526 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.939509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.939556 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.939568 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.939585 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:21 crc kubenswrapper[4913]: I0121 06:36:21.939619 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:21Z","lastTransitionTime":"2026-01-21T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.041862 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.041909 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.041926 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.041951 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.041973 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.144780 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.145199 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.145385 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.145538 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.145736 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.248688 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.248723 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.248735 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.248752 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.248767 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.351555 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.351636 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.351653 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.351675 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.351691 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.454466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.454500 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.454509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.454522 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.454530 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.496060 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:21:04.792374448 +0000 UTC Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.525364 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.525401 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.525514 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.525624 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.557176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.557220 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.557232 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.557250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.557261 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.659533 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.659573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.659585 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.659626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.659640 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.694457 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.694494 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.694510 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.694541 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.694765 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.712431 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.715738 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.715766 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.715774 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.715790 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.715799 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.727729 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.732951 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.733009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.733021 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.733037 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.733050 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.750353 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.753712 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.753772 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.753782 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.753797 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.753809 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.767006 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.771566 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.771911 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.772123 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.772283 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.772453 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.787738 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:22 crc kubenswrapper[4913]: E0121 06:36:22.787884 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.789297 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.789328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.789339 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.789355 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.789367 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.892919 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.893322 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.893857 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.894104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.894340 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.996977 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.997020 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.997032 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.997047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:22 crc kubenswrapper[4913]: I0121 06:36:22.997058 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:22Z","lastTransitionTime":"2026-01-21T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.100392 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.100434 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.100448 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.100911 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.100925 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.203857 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.204032 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.204052 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.204074 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.204091 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.306990 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.307026 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.307036 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.307053 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.307061 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.409618 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.409670 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.409687 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.409711 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.409727 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.496648 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:43:23.457925833 +0000 UTC Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.512807 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.512885 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.512907 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.512935 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.512956 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.526068 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.526115 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:23 crc kubenswrapper[4913]: E0121 06:36:23.526232 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:23 crc kubenswrapper[4913]: E0121 06:36:23.526358 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.616090 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.616132 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.616151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.616172 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.616189 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.721188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.721345 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.721370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.721395 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.721412 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.824624 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.824695 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.824713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.824739 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.824759 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.928139 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.928219 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.928246 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.928278 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:23 crc kubenswrapper[4913]: I0121 06:36:23.928304 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:23Z","lastTransitionTime":"2026-01-21T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.030888 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.030962 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.030980 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.031003 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.031020 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.133670 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.133709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.133719 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.133733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.133742 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.236371 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.236411 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.236425 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.236440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.236450 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.339882 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.340313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.340482 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.340669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.340806 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.443253 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.443287 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.443298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.443313 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.443323 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.497361 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:24:35.826473691 +0000 UTC Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.525984 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.525987 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:24 crc kubenswrapper[4913]: E0121 06:36:24.526227 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:24 crc kubenswrapper[4913]: E0121 06:36:24.526109 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.545840 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.545879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.545894 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.545924 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.545941 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.647954 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.647986 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.647994 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.648007 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.648018 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.750302 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.750331 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.750339 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.750351 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.750359 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.852851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.852879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.852889 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.852903 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.852916 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.955711 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.955760 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.955779 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.955800 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:24 crc kubenswrapper[4913]: I0121 06:36:24.955817 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:24Z","lastTransitionTime":"2026-01-21T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.058467 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.058515 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.058536 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.058559 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.058576 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.161217 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.161284 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.161301 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.161736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.161794 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.265783 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.265824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.265840 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.265862 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.265878 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.368991 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.369040 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.369052 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.369069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.369081 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.472383 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.472466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.472491 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.472522 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.472545 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.497939 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:08:59.95358194 +0000 UTC Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.525288 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:25 crc kubenswrapper[4913]: E0121 06:36:25.525478 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.525288 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:25 crc kubenswrapper[4913]: E0121 06:36:25.526236 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.575305 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.575365 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.575381 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.575405 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.575423 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.678871 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.678947 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.678966 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.678990 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.679009 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.781663 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.781720 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.781733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.781750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.781762 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.884976 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.885035 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.885047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.885081 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.885092 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.987974 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.988022 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.988039 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.988061 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:25 crc kubenswrapper[4913]: I0121 06:36:25.988077 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:25Z","lastTransitionTime":"2026-01-21T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.090562 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.090626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.090638 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.090658 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.090671 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.193644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.193691 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.193708 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.193741 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.193758 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.297134 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.297189 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.297207 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.297229 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.297247 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.398811 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.398836 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.398843 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.398855 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.398864 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.498787 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:08:36.040956554 +0000 UTC Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.502384 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.502429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.502447 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.502469 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.502485 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.526075 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.526112 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:26 crc kubenswrapper[4913]: E0121 06:36:26.526304 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:26 crc kubenswrapper[4913]: E0121 06:36:26.526380 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.605369 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.605424 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.605443 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.605503 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.605521 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.709129 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.709177 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.709198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.709225 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.709243 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.813731 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.813796 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.813814 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.813838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.813856 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.916777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.916862 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.916882 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.916904 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:26 crc kubenswrapper[4913]: I0121 06:36:26.916921 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:26Z","lastTransitionTime":"2026-01-21T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.020275 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.020335 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.020352 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.020376 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.020392 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.123775 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.123831 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.123848 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.123871 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.123891 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.227247 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.227329 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.227348 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.227374 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.227393 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.330707 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.330791 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.330823 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.330856 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.330877 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.433450 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.433560 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.433579 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.433628 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.433645 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.499995 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 13:00:45.517965816 +0000 UTC Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.525900 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.526009 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:27 crc kubenswrapper[4913]: E0121 06:36:27.526153 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:27 crc kubenswrapper[4913]: E0121 06:36:27.526550 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.537024 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.537176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.537204 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.537229 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.537249 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.543422 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.640285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.640360 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.640377 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.640398 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.640415 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.743102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.743142 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.743152 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.743168 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.743179 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.845958 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.846000 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.846016 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.846037 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.846050 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.949075 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.949125 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.949141 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.949165 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:27 crc kubenswrapper[4913]: I0121 06:36:27.949183 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:27Z","lastTransitionTime":"2026-01-21T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.052003 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.052059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.052081 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.052109 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.052132 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.155479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.155553 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.155582 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.155644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.155670 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.258452 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.258492 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.258505 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.258524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.258537 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.361339 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.361417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.361440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.361462 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.361479 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.465773 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.465853 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.465876 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.465904 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.465920 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.500736 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:04:05.213587392 +0000 UTC Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.527567 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.527686 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:28 crc kubenswrapper[4913]: E0121 06:36:28.527964 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:28 crc kubenswrapper[4913]: E0121 06:36:28.528396 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.568187 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.568235 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.568247 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.568263 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.568275 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.671445 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.671505 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.671527 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.671555 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.671575 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.774551 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.774689 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.774715 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.774742 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.774766 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.877430 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.877492 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.877514 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.877542 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.877563 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.980825 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.980875 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.980891 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.980912 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:28 crc kubenswrapper[4913]: I0121 06:36:28.980931 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:28Z","lastTransitionTime":"2026-01-21T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.083758 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.083825 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.083849 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.083880 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.083901 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.186711 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.186771 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.186794 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.186824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.186882 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.289497 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.289555 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.289571 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.289630 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.289648 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.393235 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.393300 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.393317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.393342 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.393360 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.497104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.497159 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.497180 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.497210 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.497233 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.501457 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:06:11.245681834 +0000 UTC Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.526330 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:29 crc kubenswrapper[4913]: E0121 06:36:29.526538 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.526358 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:29 crc kubenswrapper[4913]: E0121 06:36:29.527196 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.527516 4913 scope.go:117] "RemoveContainer" containerID="cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.601092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.601363 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.601382 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.601404 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.601420 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.704984 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.705044 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.705065 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.705094 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.705120 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.807615 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.807662 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.807676 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.807693 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.807704 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.910310 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.910349 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.910360 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.910375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:29 crc kubenswrapper[4913]: I0121 06:36:29.910386 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:29Z","lastTransitionTime":"2026-01-21T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.012545 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.012641 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.012663 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.012691 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.012714 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.115616 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.115673 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.115690 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.115712 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.115729 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.218555 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.218649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.218667 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.218696 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.218819 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.321740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.321794 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.321814 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.321839 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.321855 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.424585 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.424682 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.424699 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.424728 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.424752 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.501652 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:26:10.301602571 +0000 UTC Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.525744 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:30 crc kubenswrapper[4913]: E0121 06:36:30.525897 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.525988 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:30 crc kubenswrapper[4913]: E0121 06:36:30.526193 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.527747 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.527798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.527816 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.527841 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.527859 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.543298 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da767
4832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.562387 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.579377 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.596357 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.612700 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.629738 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"066e57b6-4c02-4f8b-a13a-1e024822f558\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.630004 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.630062 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.630084 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.630114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.630134 4913 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.649263 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.663147 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.684210 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.704736 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.725200 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] 
Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.732727 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.732791 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.732810 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.732835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.732855 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.740929 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.758075 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.776756 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.791729 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.806191 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc 
kubenswrapper[4913]: I0121 06:36:30.834050 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.835950 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.836011 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.836030 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.836055 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.836076 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.849978 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.883553 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286
fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:30Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.938667 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.938719 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.938736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.938762 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:30 crc kubenswrapper[4913]: I0121 06:36:30.938779 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:30Z","lastTransitionTime":"2026-01-21T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.006689 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/2.log" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.010825 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.011628 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.032769 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.042336 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.042372 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.042382 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc 
kubenswrapper[4913]: I0121 06:36:31.042398 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.042410 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.045649 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"066e57b6-4c02-4f8b-a13a-1e024822f558\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.064622 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.083698 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.095632 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.111200 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.128173 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e56
12571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.143886 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.144533 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.144579 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.144607 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.144624 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.144635 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.153947 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.170806 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.184310 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.197521 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.210300 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.224203 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.248120 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.248176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.248193 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.248225 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.248243 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.254661 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1c
efc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.273425 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.288729 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.305787 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc 
kubenswrapper[4913]: I0121 06:36:31.336740 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:31Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.351099 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.351170 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.351188 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.351214 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.351233 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.454071 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.454117 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.454136 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.454159 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.454175 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.502676 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:49:13.070990405 +0000 UTC Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.526201 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.526295 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:31 crc kubenswrapper[4913]: E0121 06:36:31.526344 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:31 crc kubenswrapper[4913]: E0121 06:36:31.526518 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.557159 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.557213 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.557232 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.557257 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.557274 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.659644 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.659694 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.659706 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.659724 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.659735 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.761700 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.761730 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.761739 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.761759 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.761767 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.864232 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.864309 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.864321 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.864341 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.864354 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.966786 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.966817 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.966827 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.966843 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:31 crc kubenswrapper[4913]: I0121 06:36:31.966854 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:31Z","lastTransitionTime":"2026-01-21T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.069228 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.069264 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.069274 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.069288 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.069298 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.172340 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.172387 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.172407 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.172430 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.172480 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.274838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.274866 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.274874 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.274903 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.274913 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.374264 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.374448 4913 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.374545 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.37452428 +0000 UTC m=+146.170884043 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.378051 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.378110 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.378130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.378157 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.378184 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.475146 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475293 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.475267167 +0000 UTC m=+146.271626870 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.475455 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.475533 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475758 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475793 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475811 4913 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475814 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475872 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.475856924 +0000 UTC m=+146.272216627 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475894 4913 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.475924 4913 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.475957 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.476024 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.475990417 +0000 UTC m=+146.272350130 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.476077 4913 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.476137 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.47612115 +0000 UTC m=+146.272480863 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.481358 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.481422 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.481438 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.481461 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.481478 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.503621 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:56:57.545761768 +0000 UTC Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.525377 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.525520 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.525742 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.525876 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.583844 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.583878 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.583887 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.583899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.583908 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.686687 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.686846 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.686875 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.686901 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.686921 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.780038 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.780211 4913 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.780265 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs podName:60ed8982-ee20-4330-861f-61509c39bbe7 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.780248766 +0000 UTC m=+146.576608449 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs") pod "network-metrics-daemon-wfcsc" (UID: "60ed8982-ee20-4330-861f-61509c39bbe7") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.790191 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.790237 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.790249 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.790268 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.790282 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.892914 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.892975 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.892986 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.893002 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.893013 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.946753 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.946839 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.946867 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.946898 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.946922 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.967051 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.971776 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.971817 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.971835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.971862 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.971878 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:32 crc kubenswrapper[4913]: E0121 06:36:32.994972 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:32Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.998784 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.998824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.998833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.998846 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:32 crc kubenswrapper[4913]: I0121 06:36:32.998855 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:32Z","lastTransitionTime":"2026-01-21T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.015658 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.017628 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/3.log" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.018249 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/2.log" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.020096 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.020132 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.020143 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.020159 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.020195 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.022189 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" exitCode=1 Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.022254 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.022313 4913 scope.go:117] "RemoveContainer" containerID="cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.023947 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.024394 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.041712 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.044705 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.047189 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.047250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.047269 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.047293 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.047312 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.062216 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"n
ame\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3
bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.066388 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.066575 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.068526 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.068564 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.068576 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.068611 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.068626 4913 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.084861 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.102964 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.117746 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.133388 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T0
6:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.150664 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] 
Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.162036 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.171512 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.171548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.171560 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc 
kubenswrapper[4913]: I0121 06:36:33.171575 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.171606 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.173180 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc 
kubenswrapper[4913]: I0121 06:36:33.193311 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"shift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 06:36:32.066654 6993 services_controller.go:452] 
Built service openshift-authentication/oauth-openshift per-node LB for network=default: []services.LB{}\\\\nI0121 06:36:32.066664 6993 services_controller.go:453] Built service openshift-authentication/oauth-openshift template LB for network=default: []services.LB{}\\\\nI0121 06:36:32.066674 6993 services_controller.go:454] Service openshift-authentication/oauth-openshift for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0121 06:36:32.066641 6993 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not ad\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"ho
st-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.208239 4913 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.241304 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.261914 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:
12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.274249 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.274308 
4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.274327 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.274355 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.274372 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.283257 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.302911 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.321292 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.340530 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.355985 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"066e57b6-4c02-4f8b-a13a-1e024822f558\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.375414 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.377014 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.377067 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.377086 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.377125 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.377137 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.479250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.479343 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.479370 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.479403 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.479425 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.504112 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:10:46.863872613 +0000 UTC Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.525374 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.525393 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.525634 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:33 crc kubenswrapper[4913]: E0121 06:36:33.525788 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.582478 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.582543 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.582567 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.582626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.582644 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.685947 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.686281 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.686298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.686322 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.686341 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.789855 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.789901 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.789914 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.789930 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.789942 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.893039 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.893134 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.893152 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.893174 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.893191 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.996443 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.996505 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.996521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.996548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:33 crc kubenswrapper[4913]: I0121 06:36:33.996566 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:33Z","lastTransitionTime":"2026-01-21T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.027902 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/3.log" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.105699 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.105794 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.105820 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.105851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.105882 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.208635 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.208669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.208681 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.208696 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.208708 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.311426 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.311458 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.311468 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.311483 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.311493 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.415367 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.415426 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.415443 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.415466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.415485 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.505153 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 23:10:05.103975164 +0000 UTC Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.519734 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.519786 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.519798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.519815 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.519827 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.526015 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.526041 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:34 crc kubenswrapper[4913]: E0121 06:36:34.526134 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:34 crc kubenswrapper[4913]: E0121 06:36:34.526362 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.622858 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.622892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.622907 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.622925 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.622949 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.725367 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.725426 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.725440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.725456 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.725466 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.827749 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.827807 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.827823 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.827847 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.827863 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.930443 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.930507 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.930528 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.930550 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:34 crc kubenswrapper[4913]: I0121 06:36:34.930569 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:34Z","lastTransitionTime":"2026-01-21T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.033794 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.033874 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.033889 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.033906 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.033943 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.137361 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.137426 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.137451 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.137498 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.137522 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.240666 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.240735 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.240751 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.240774 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.240791 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.343292 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.343327 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.343335 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.343347 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.343357 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.445973 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.446027 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.446061 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.446087 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.446107 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.505707 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 04:58:42.202589035 +0000 UTC Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.525366 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.525363 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:35 crc kubenswrapper[4913]: E0121 06:36:35.525902 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:35 crc kubenswrapper[4913]: E0121 06:36:35.526005 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.548729 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.548805 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.548829 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.548861 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.548883 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.651549 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.651622 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.651635 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.651652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.651665 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.754519 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.754570 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.754635 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.754655 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.754668 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.858184 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.858532 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.858654 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.858750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.858840 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.962172 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.962784 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.963012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.963325 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:35 crc kubenswrapper[4913]: I0121 06:36:35.963469 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:35Z","lastTransitionTime":"2026-01-21T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.066759 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.066830 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.066851 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.066879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.066900 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.169431 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.169497 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.169520 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.169542 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.169560 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.272775 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.272842 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.272863 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.272889 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.272907 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.376319 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.376377 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.376394 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.376416 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.376433 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.479002 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.479067 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.479084 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.479108 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.479124 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.506496 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 00:18:02.029288881 +0000 UTC Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.526305 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.526324 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:36 crc kubenswrapper[4913]: E0121 06:36:36.526550 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:36 crc kubenswrapper[4913]: E0121 06:36:36.526700 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.582640 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.582705 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.582724 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.582745 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.582764 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.686029 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.686088 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.686106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.686131 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.686148 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.788852 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.789290 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.789365 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.789447 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.789509 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.893207 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.893576 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.893761 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.893899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.894050 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.996970 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.997033 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.997048 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.997074 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:36 crc kubenswrapper[4913]: I0121 06:36:36.997096 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:36Z","lastTransitionTime":"2026-01-21T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.106132 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.106184 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.106202 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.106237 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.106255 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.209716 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.210055 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.210240 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.210389 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.210568 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.313933 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.313995 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.314012 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.314036 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.314053 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.416847 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.416910 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.416926 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.416949 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.416966 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.507486 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 04:59:26.659678856 +0000 UTC Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.520526 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.520625 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.520643 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.520667 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.520683 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.525822 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:37 crc kubenswrapper[4913]: E0121 06:36:37.526036 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.526172 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:37 crc kubenswrapper[4913]: E0121 06:36:37.526411 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.623614 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.623734 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.623760 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.624243 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.624529 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.728461 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.728524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.728545 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.728573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.728627 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.832043 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.832095 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.832111 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.832135 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.832152 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.935118 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.935197 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.935215 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.935240 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:37 crc kubenswrapper[4913]: I0121 06:36:37.935256 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:37Z","lastTransitionTime":"2026-01-21T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.037857 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.037915 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.037933 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.037957 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.037976 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.140985 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.141038 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.141056 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.141079 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.141099 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.244088 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.244158 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.244181 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.244214 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.244237 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.347050 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.347107 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.347129 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.347156 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.347176 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.450124 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.450201 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.450219 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.450244 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.450264 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.508186 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:59:31.168726376 +0000 UTC Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.526170 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.526236 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:38 crc kubenswrapper[4913]: E0121 06:36:38.526462 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:38 crc kubenswrapper[4913]: E0121 06:36:38.526627 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.552473 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.552541 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.552565 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.552629 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.552656 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.654920 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.654980 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.654996 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.655019 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.655038 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.758158 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.758210 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.758227 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.758250 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.758267 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.861966 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.862042 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.862065 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.862093 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.862116 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.965469 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.965531 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.965548 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.965573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:38 crc kubenswrapper[4913]: I0121 06:36:38.965632 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:38Z","lastTransitionTime":"2026-01-21T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.068092 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.068181 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.068198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.068223 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.068239 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.170100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.170160 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.170176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.170198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.170214 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.272458 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.272509 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.272535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.272554 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.272566 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.375975 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.376031 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.376047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.376069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.376087 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.479053 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.479121 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.479145 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.479176 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.479199 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.508686 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:30:29.612798707 +0000 UTC Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.526114 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.526228 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:39 crc kubenswrapper[4913]: E0121 06:36:39.526316 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:39 crc kubenswrapper[4913]: E0121 06:36:39.526725 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.582254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.582305 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.582321 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.582342 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.582358 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.685890 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.685964 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.685987 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.686013 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.686034 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.788386 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.788450 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.788476 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.788501 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.788518 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.890804 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.890871 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.890894 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.890924 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.890947 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.994254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.994311 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.994327 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.994348 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:39 crc kubenswrapper[4913]: I0121 06:36:39.994364 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:39Z","lastTransitionTime":"2026-01-21T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.097479 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.097558 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.097577 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.097648 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.097668 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.200510 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.200617 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.200634 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.200652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.200664 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.303838 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.303887 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.303902 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.303921 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.303933 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.407499 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.407577 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.407635 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.407660 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.407678 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.509262 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 02:59:54.75291436 +0000 UTC Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.511009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.511060 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.511076 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.511100 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.511116 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.525726 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.525769 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:40 crc kubenswrapper[4913]: E0121 06:36:40.525894 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:40 crc kubenswrapper[4913]: E0121 06:36:40.525997 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.548369 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.570050 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.585215 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.621383 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.621468 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.621491 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.621521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.621543 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.654409 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.677445 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.692538 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.705072 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc 
kubenswrapper[4913]: I0121 06:36:40.723661 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.723716 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.723733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.723755 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.723771 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.727072 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc82b195f9c3149f308224a6362b5624cf812772b4437b4e44742e2974230ae0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:02Z\\\",\\\"message\\\":\\\"vent handler 8\\\\nI0121 06:36:02.025509 6591 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.025689 6591 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0121 06:36:02.029730 6591 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 06:36:02.029760 6591 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0121 06:36:02.029899 6591 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 06:36:02.029910 6591 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 06:36:02.029966 6591 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 06:36:02.029998 6591 factory.go:656] Stopping watch factory\\\\nI0121 06:36:02.030018 6591 ovnkube.go:599] Stopped ovnkube\\\\nI0121 06:36:02.030058 6591 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 06:36:02.030071 6591 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 06:36:02.030080 6591 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 06:36:02.030088 6591 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 06:36:02.030096 6591 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 06:36:02.030118 6591 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 06:36:02.030199 6591 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"shift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 06:36:32.066654 6993 services_controller.go:452] 
Built service openshift-authentication/oauth-openshift per-node LB for network=default: []services.LB{}\\\\nI0121 06:36:32.066664 6993 services_controller.go:453] Built service openshift-authentication/oauth-openshift template LB for network=default: []services.LB{}\\\\nI0121 06:36:32.066674 6993 services_controller.go:454] Service openshift-authentication/oauth-openshift for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0121 06:36:32.066641 6993 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not ad\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"ho
st-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.744330 4913 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454df7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.755314 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"066e57b6-4c02-4f8b-a13a-1e024822f558\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.767812 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.783136 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.796967 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.806621 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.816908 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.825542 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.825573 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.825581 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.825612 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.825625 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.832888 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.847071 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.858490 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.872035 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f
017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:40Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.928563 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.928670 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.928696 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.928719 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:40 crc kubenswrapper[4913]: I0121 06:36:40.928736 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:40Z","lastTransitionTime":"2026-01-21T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.032707 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.032767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.032785 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.032810 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.032827 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.139892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.139977 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.140001 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.140029 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.140050 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.242956 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.243020 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.243041 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.243069 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.243093 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.345751 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.345822 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.345844 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.345873 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.345897 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.448187 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.448267 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.448290 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.448322 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.448346 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.510128 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:35:51.118793718 +0000 UTC Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.525679 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:41 crc kubenswrapper[4913]: E0121 06:36:41.525866 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.525692 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:41 crc kubenswrapper[4913]: E0121 06:36:41.525997 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.552214 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.552266 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.552286 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.552310 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.552330 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.656144 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.656221 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.656240 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.656265 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.656287 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.759290 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.759356 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.759375 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.759401 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.759420 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.880285 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.880350 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.880492 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.880626 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.880684 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.984427 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.984526 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.984545 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.984574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:41 crc kubenswrapper[4913]: I0121 06:36:41.984726 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:41Z","lastTransitionTime":"2026-01-21T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.087877 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.087919 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.087934 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.087953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.087966 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.191615 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.191718 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.191733 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.191759 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.191777 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.294879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.294922 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.294932 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.294952 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.294963 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.397193 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.397248 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.397264 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.397289 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.397310 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.500320 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.500397 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.500421 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.500452 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.500475 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.510548 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:36:35.989284474 +0000 UTC Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.526070 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.526119 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:42 crc kubenswrapper[4913]: E0121 06:36:42.526237 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:42 crc kubenswrapper[4913]: E0121 06:36:42.526352 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.603553 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.603650 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.603674 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.603704 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.603727 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.706735 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.706831 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.706847 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.706866 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.706879 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.810704 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.810778 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.810795 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.810822 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.810846 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.913356 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.913429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.913452 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.913480 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:42 crc kubenswrapper[4913]: I0121 06:36:42.913501 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:42Z","lastTransitionTime":"2026-01-21T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.017058 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.017511 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.017582 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.017709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.017794 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.121329 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.121393 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.121406 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.121429 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.121442 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.224869 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.224948 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.224970 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.224998 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.225020 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.327997 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.328063 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.328080 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.328109 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.328128 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.359315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.359380 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.359398 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.359422 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.359439 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.381200 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.386001 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.386056 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.386075 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.386103 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.386120 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.405921 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.411212 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.411291 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.411317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.411350 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.411373 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.431664 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.436708 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.436789 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.436818 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.436850 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.436877 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.457451 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.467017 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.467094 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.467246 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.467298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.467327 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.485319 4913 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"dc2e078c-6a92-4a2e-a56c-2176218bd01c\\\",\\\"systemUUID\\\":\\\"7037ee30-9526-47b8-97e2-90db93aaec61\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.485556 4913 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.487936 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.487991 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.488009 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.488033 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.488049 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.511228 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:44:50.722343787 +0000 UTC Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.525725 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.525766 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.525885 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:43 crc kubenswrapper[4913]: E0121 06:36:43.526041 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.590620 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.590698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.590721 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.590748 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.590768 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.693482 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.693830 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.693988 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.694145 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.694288 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.797419 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.797482 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.797499 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.797524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.797545 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.900182 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.900243 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.900261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.900283 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:43 crc kubenswrapper[4913]: I0121 06:36:43.900302 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:43Z","lastTransitionTime":"2026-01-21T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.003461 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.003544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.003567 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.003629 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.003648 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.106238 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.106301 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.106324 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.106353 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.106378 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.209388 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.209440 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.209456 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.209481 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.209498 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.312854 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.312910 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.312929 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.312953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.312971 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.456376 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.456434 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.456451 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.456476 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.456493 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.512072 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:58:08.512731489 +0000 UTC Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.526447 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.526453 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:44 crc kubenswrapper[4913]: E0121 06:36:44.526719 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:44 crc kubenswrapper[4913]: E0121 06:36:44.526859 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.559815 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.559873 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.559891 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.559915 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.559933 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.663357 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.663754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.663896 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.664140 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.664354 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.768106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.768505 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.768685 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.768853 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.768977 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.872041 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.872088 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.872104 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.872125 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.872142 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.974754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.974827 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.974849 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.974877 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:44 crc kubenswrapper[4913]: I0121 06:36:44.974901 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:44Z","lastTransitionTime":"2026-01-21T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.078028 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.078084 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.078102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.078127 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.078144 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.181460 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.181519 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.181535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.181558 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.181574 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.284271 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.284344 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.284369 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.284394 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.284412 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.387476 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.387559 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.387578 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.387633 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.387652 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.489467 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.489539 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.489557 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.489581 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.489636 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.512909 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:10:31.659767836 +0000 UTC Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.525492 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.525554 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:45 crc kubenswrapper[4913]: E0121 06:36:45.525699 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:45 crc kubenswrapper[4913]: E0121 06:36:45.525777 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.592008 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.592079 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.592099 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.592122 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.592140 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.694996 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.695271 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.695363 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.695451 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.695535 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.798758 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.798849 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.798870 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.798892 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.798911 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.901048 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.901089 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.901097 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.901110 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:45 crc kubenswrapper[4913]: I0121 06:36:45.901118 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:45Z","lastTransitionTime":"2026-01-21T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.003464 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.003519 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.003535 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.003557 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.003576 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.106812 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.106879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.106899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.106924 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.106944 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.210047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.210094 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.210105 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.210125 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.210138 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.313036 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.313103 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.313122 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.313146 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.313166 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.415756 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.415806 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.415817 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.415833 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.415845 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.513328 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:21:01.932002805 +0000 UTC Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.520961 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.521027 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.521045 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.521073 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.521091 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.525326 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.525422 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:46 crc kubenswrapper[4913]: E0121 06:36:46.525518 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:46 crc kubenswrapper[4913]: E0121 06:36:46.525707 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.624678 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.624740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.624757 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.624787 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.624805 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.728764 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.728824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.728841 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.728865 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.728884 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.831382 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.831444 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.831460 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.831487 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.831508 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.934102 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.934180 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.934233 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.934258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:46 crc kubenswrapper[4913]: I0121 06:36:46.934272 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:46Z","lastTransitionTime":"2026-01-21T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.036756 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.036792 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.036801 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.036814 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.036824 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.139506 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.139544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.139552 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.139596 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.139606 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.241849 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.241912 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.241929 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.241951 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.241968 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.343879 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.343953 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.343975 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.344002 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.344026 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.446687 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.446740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.446758 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.446777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.446793 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.513938 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:52:20.229302984 +0000 UTC Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.526407 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.526424 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:47 crc kubenswrapper[4913]: E0121 06:36:47.526935 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:47 crc kubenswrapper[4913]: E0121 06:36:47.527151 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.527298 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:36:47 crc kubenswrapper[4913]: E0121 06:36:47.527543 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.541902 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cpmwx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"440ae0d9-f160-4f49-8b38-61c65d93eea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://954422a6a0d4eddcf1ff1d3daf6659d44b777731a445aeaafa9eebb57b4d9841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:32Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cpmwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.549345 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.549474 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.549558 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.549669 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.549777 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.556960 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://712990687a0d46e49dc74d1a16a7bc95751b332c5eb1bf91e25f3767cd61efe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.573176 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9203edfc020a75
0af1422fa7240b9bee7d1725388a16415e639173610873ccdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:15Z\\\",\\\"message\\\":\\\"2026-01-21T06:35:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda\\\\n2026-01-21T06:35:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7cb04411-9824-4bb0-870e-7d9b05d3ebda to /host/opt/cni/bin/\\\\n2026-01-21T06:35:30Z [verbose] multus-daemon started\\\\n2026-01-21T06:35:30Z [verbose] Readiness Indicator file check\\\\n2026-01-21T06:36:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:36:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\"
,\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c6s4k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.589660 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"941d5e91-9bf3-44dc-be69-629cb2516e7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://930ccc7139e1ab512896b2d3bc63f07e846e312e82638b5fcb3ebb15ba95384b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d7
23cae8dcf68970eb582fd355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rlg6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sqswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.604164 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60ed8982-ee20-4330-861f-61509c39bbe7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4r7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wfcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc 
kubenswrapper[4913]: I0121 06:36:47.623368 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afe1e161-7227-48ff-824e-01d26e5c7218\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T06:36:32Z\\\",\\\"message\\\":\\\"shift-authentication/oauth-openshift\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.222\\\\\\\", Port:443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 06:36:32.066654 6993 services_controller.go:452] Built service openshift-authentication/oauth-openshift per-node LB for network=default: []services.LB{}\\\\nI0121 06:36:32.066664 6993 services_controller.go:453] Built service openshift-authentication/oauth-openshift template LB for network=default: []services.LB{}\\\\nI0121 06:36:32.066674 6993 services_controller.go:454] Service openshift-authentication/oauth-openshift for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0121 06:36:32.066641 6993 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not ad\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:36:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d497ea00a26849157
e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8229\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-c7xtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.639497 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4aaba44f-534c-4eac-9250-e6e737a701bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56d8a876a155f3f1dcd785f944caeae17bb1a8637c28946b4bc64e158bcc2d83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c27e2de0bab4587697160e7eafe3f3d0454d
f7e8ad8ed52a6c05d92520b1e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dd5c2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-kkr2r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.652019 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.652346 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.652500 4913 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.652688 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.652885 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.671514 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7453a8f4-5f0e-4257-a5f2-d148ae7c1c5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://208db8998fdb0416d67e751e0053503eb5c3a22a88920de339889e456ed571fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f07acab96cee4352727629a5626e614849a744d174d3383f2dff955dc2937a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2c26fe520a69bb8a56e168e71b07281f90a4e6dd0e2ac5a25848c1eb18de400\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79af0b4a2e7e287bb0190e71f99923d1cc067f818220b1cae70937d4ce32fa19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c70f52dfff18664d18e6fe67eec278715fc89f05d5892c60123269c01ac11cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a7a323b23bcc65519782cadc96ff954a7401a40ee4917032e75f5cc99987b61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4ddb04d1c851d4276947a33e7cab85ff7652b72c77ff3b8b75a2e11c521c0f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1c
efc76d35c5220f3f168b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1594d3b3c7dfcdb9303a21a654d886286fcb2db8e1cefc76d35c5220f3f168b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.689989 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d22e9411-e16f-4b82-8326-75022a087d22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b64cac3e54218bcf593e7ab38e833756496b97d897bf3a1e043d1ab7c87d216b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74cef83063660e07bb4021c1ee2c81d68952ef6d6eb149520b0d9220b4bb511\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14bd46b4db2c899d31822a4807e7934892232afcc63c6f2f404e2d8d300652c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.711246 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.733777 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.750718 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.755661 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.755713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.755771 4913 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.755798 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.755815 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.763463 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.775986 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"066e57b6-4c02-4f8b-a13a-1e024822f558\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.791102 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.803677 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jpn7w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f47ec5-848c-4b9b-9828-8dd3ddb96a18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55a266f9
b453a8ae717b566499a9d1c8d2b9a87d3485070997e34c4fd7b10785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpzhr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jpn7w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.827319 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e8f223b-fd76-4720-a29f-cb89654e33f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3da686e83c17493ce8b24301ce25cce302e6128890820d7dc0a9ba2629d2247\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfb2326828ed73de550b952f645c7f5dd381520e2bbac475462ae3b299a0e65c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcbca4ee6b6e12b1d770bbedd61b1a0d003d78702a4b5750781db7732c537243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84f1b8ea28fde8e7fa8d7102d216dd0514bd30da10a9b80759c69d4713e7c6b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24d6c
2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24d6c2533d65bf1829f6e80afe8e264af0814e3b0fee6a55292d96b9fb43fb30\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://117145cdf4019f6a292b19970ffa124f017713ccf1d520a426fb12c5caa7c9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:34Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb9fecd8dd6ebb088a8550f02e84067236c9b8f25a84746c275217eb400c851\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8jfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2lxrr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.850893 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f45b52c3-5a8a-4d2d-864d-059884213e59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 06:35:23.128986 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 06:35:23.136254 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1923718831/tls.crt::/tmp/serving-cert-1923718831/tls.key\\\\\\\"\\\\nI0121 06:35:28.717373 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 06:35:28.721464 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 06:35:28.721483 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 06:35:28.721506 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 06:35:28.721510 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 06:35:28.726397 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0121 06:35:28.726427 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 06:35:28.726436 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726449 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 06:35:28.726454 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 06:35:28.726459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 06:35:28.726463 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 06:35:28.726469 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 06:35:28.729285 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.859302 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.859376 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.859393 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.859417 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.859434 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.875163 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88ca56906672a4acfbcc4b6ba3bc2aa20e8991c16a168faac1b4b16fcccc02b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7262c92e372dc34c9efe134361d63134b9edb3e06522d101505b52ec707c70db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:47Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.962637 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 
06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.962698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.962727 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.962769 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:47 crc kubenswrapper[4913]: I0121 06:36:47.962794 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:47Z","lastTransitionTime":"2026-01-21T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.065793 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.066207 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.066337 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.066485 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.066669 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.169521 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.169627 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.169653 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.169680 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.169702 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.273151 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.273212 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.273231 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.273258 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.273277 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.376129 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.376181 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.376298 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.376394 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.376416 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.479884 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.480302 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.480466 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.480659 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.480849 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.514516 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:24:10.880352614 +0000 UTC Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.526053 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.526112 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:48 crc kubenswrapper[4913]: E0121 06:36:48.526233 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:48 crc kubenswrapper[4913]: E0121 06:36:48.526882 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.583875 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.583927 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.583951 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.583998 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.584023 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.686787 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.687056 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.687203 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.687326 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.687458 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.791047 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.791319 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.791483 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.791857 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.792034 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.896059 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.896128 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.896146 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.896172 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.896192 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.999460 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.999523 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.999542 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.999565 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:48 crc kubenswrapper[4913]: I0121 06:36:48.999583 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:48Z","lastTransitionTime":"2026-01-21T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.103198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.103261 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.103279 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.103303 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.103322 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.206824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.207307 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.207327 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.207353 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.207370 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.310882 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.311711 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.311754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.311787 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.311809 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.414749 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.414828 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.414843 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.414869 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.414891 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.514992 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:26:48.979447129 +0000 UTC Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.518485 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.518539 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.518556 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.518580 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.518631 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.525895 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.525970 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:49 crc kubenswrapper[4913]: E0121 06:36:49.526067 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:49 crc kubenswrapper[4913]: E0121 06:36:49.526198 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.621424 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.621507 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.621527 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.621564 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.621630 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.725252 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.725315 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.725332 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.725358 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.725376 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.829120 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.829202 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.829221 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.829251 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.829272 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.932211 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.932583 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.932818 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.933039 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:49 crc kubenswrapper[4913]: I0121 06:36:49.933259 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:49Z","lastTransitionTime":"2026-01-21T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.037254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.037317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.037334 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.037359 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.037377 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.140450 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.140525 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.140679 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.140709 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.140732 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.244323 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.244404 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.244432 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.244469 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.244493 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.348205 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.348275 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.348293 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.348317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.348334 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.451376 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.451438 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.451460 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.451489 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.451516 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.516186 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 18:31:40.621985324 +0000 UTC Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.525801 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:50 crc kubenswrapper[4913]: E0121 06:36:50.526536 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.527017 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:50 crc kubenswrapper[4913]: E0121 06:36:50.527167 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.544573 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"066e57b6-4c02-4f8b-a13a-1e024822f558\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f964d4e9e25a4f6701b06fc2994c156f58bad0a01598ec2d11a37524801f3cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5720e3fba723fa08e1c96791f993a216e2b1c1d9433cbfb21a8ce5ff573d31cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.555360 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.555421 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.555444 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.555475 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.555498 4913 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.563752 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb82abfa-3ed3-4c9e-be2a-ed1b85d6ab85\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acaaa6034272af77587f88a7e6e9b7245c94eb9883d0d115644f385cf8ec2ea0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd7852cff9679e0b8703b1c44da7674832feabfa2a0149bd05804e5658eac742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7df7e5ce93f2bb6f8d24b921a595da4f0a027e38f127b6067e30fa6f20679bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\
\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0cbe9ae3458cff497886c43d42dd1ffc8274838abb7c3f096e434f24a767ad6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T06:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T06:35:11Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T06:35:10Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.584418 4913 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T06:35:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d937bd6d8e7d900820491514f11ebec7b3cb2be68e9d83cfede5f3f0fe8555a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T06:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T06:36:50Z is after 2025-08-24T17:21:41Z" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.659435 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.659502 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.659518 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.659544 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.659561 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.697811 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.697781267 podStartE2EDuration="1m21.697781267s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.677304111 +0000 UTC m=+100.473663824" watchObservedRunningTime="2026-01-21 06:36:50.697781267 +0000 UTC m=+100.494140970" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.715389 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jpn7w" podStartSLOduration=82.715362276 podStartE2EDuration="1m22.715362276s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.715354626 +0000 UTC m=+100.511714339" watchObservedRunningTime="2026-01-21 06:36:50.715362276 +0000 UTC m=+100.511721979" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.763873 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.763923 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.763936 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.763954 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.763968 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.766938 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2lxrr" podStartSLOduration=82.766910822 podStartE2EDuration="1m22.766910822s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.747417372 +0000 UTC m=+100.543777075" watchObservedRunningTime="2026-01-21 06:36:50.766910822 +0000 UTC m=+100.563270525" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.803857 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gn6lz" podStartSLOduration=82.803822757 podStartE2EDuration="1m22.803822757s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.786093243 +0000 UTC m=+100.582452916" watchObservedRunningTime="2026-01-21 06:36:50.803822757 +0000 UTC m=+100.600182440" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.804474 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-cpmwx" podStartSLOduration=81.804464993 podStartE2EDuration="1m21.804464993s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.80205634 +0000 
UTC m=+100.598416073" watchObservedRunningTime="2026-01-21 06:36:50.804464993 +0000 UTC m=+100.600824666" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.866371 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.866351926 podStartE2EDuration="1m22.866351926s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.866217502 +0000 UTC m=+100.662577185" watchObservedRunningTime="2026-01-21 06:36:50.866351926 +0000 UTC m=+100.662711619" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.866538 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.86653183 podStartE2EDuration="1m18.86653183s" podCreationTimestamp="2026-01-21 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.837407013 +0000 UTC m=+100.633766726" watchObservedRunningTime="2026-01-21 06:36:50.86653183 +0000 UTC m=+100.662891513" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.867652 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.867726 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.867741 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.867767 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.867783 
4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.884049 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podStartSLOduration=82.884019546 podStartE2EDuration="1m22.884019546s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.883156054 +0000 UTC m=+100.679515727" watchObservedRunningTime="2026-01-21 06:36:50.884019546 +0000 UTC m=+100.680379239" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.975114 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.975160 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.975174 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.975198 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:50 crc kubenswrapper[4913]: I0121 06:36:50.975213 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:50Z","lastTransitionTime":"2026-01-21T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.079640 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.079708 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.079725 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.079750 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.079767 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.182565 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.182649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.182667 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.182690 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.182705 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.285824 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.285899 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.285925 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.285950 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.285969 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.389817 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.389894 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.389913 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.389941 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.389959 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.493649 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.493716 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.493740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.493775 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.493799 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.517071 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:47:59.220272372 +0000 UTC Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.526502 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:51 crc kubenswrapper[4913]: E0121 06:36:51.526802 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.527207 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:51 crc kubenswrapper[4913]: E0121 06:36:51.527370 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.596701 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.596785 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.596810 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.596835 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.596853 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.700179 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.700254 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.700287 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.700318 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.700341 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.803163 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.803237 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.803288 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.803317 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.803333 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.906633 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.906694 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.906713 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.906740 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:51 crc kubenswrapper[4913]: I0121 06:36:51.906758 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:51Z","lastTransitionTime":"2026-01-21T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.010087 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.010157 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.010182 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.010217 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.010237 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.113226 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.113283 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.113299 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.113350 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.113367 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.216048 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.216112 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.216130 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.216153 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.216171 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.318668 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.318736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.318754 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.318777 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.318794 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.421482 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.421524 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.421538 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.421556 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.421604 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.517619 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:13:22.658497583 +0000 UTC Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.524513 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.524550 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.524561 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.524574 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.524585 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.525726 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.525808 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:52 crc kubenswrapper[4913]: E0121 06:36:52.525901 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:52 crc kubenswrapper[4913]: E0121 06:36:52.526097 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.627952 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.628010 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.628022 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.628042 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.628054 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.730739 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.730866 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.730893 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.730930 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.730954 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.833244 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.833292 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.833308 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.833328 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.833344 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.936627 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.936692 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.936711 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.936736 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:52 crc kubenswrapper[4913]: I0121 06:36:52.936755 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:52Z","lastTransitionTime":"2026-01-21T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.038844 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.038883 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.038894 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.038929 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.038941 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:53Z","lastTransitionTime":"2026-01-21T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.141552 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.141655 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.141672 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.141698 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.141716 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:53Z","lastTransitionTime":"2026-01-21T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.244106 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.244189 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.244215 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.244247 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.244270 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:53Z","lastTransitionTime":"2026-01-21T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.346901 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.346961 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.346980 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.347003 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.347023 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:53Z","lastTransitionTime":"2026-01-21T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.449744 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.449819 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.449848 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.449880 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.449903 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:53Z","lastTransitionTime":"2026-01-21T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.517895 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:48:10.574419777 +0000 UTC Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.526174 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.526200 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:53 crc kubenswrapper[4913]: E0121 06:36:53.526313 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:53 crc kubenswrapper[4913]: E0121 06:36:53.526443 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.550616 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.550656 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.550664 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.550676 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:53 crc kubenswrapper[4913]: I0121 06:36:53.550685 4913 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:53Z","lastTransitionTime":"2026-01-21T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.189453 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.189519 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.189536 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.189562 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.189580 4913 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T06:36:54Z","lastTransitionTime":"2026-01-21T06:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.218463 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-kkr2r" podStartSLOduration=85.218433346 podStartE2EDuration="1m25.218433346s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:50.939701303 +0000 UTC m=+100.736060986" watchObservedRunningTime="2026-01-21 06:36:54.218433346 +0000 UTC m=+104.014793059" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.219662 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz"] Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.220190 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.222976 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.223885 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.223956 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.224696 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.251826 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
podStartSLOduration=27.251789677 podStartE2EDuration="27.251789677s" podCreationTimestamp="2026-01-21 06:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:54.250892752 +0000 UTC m=+104.047252435" watchObservedRunningTime="2026-01-21 06:36:54.251789677 +0000 UTC m=+104.048149350" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.265560 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=50.265553703 podStartE2EDuration="50.265553703s" podCreationTimestamp="2026-01-21 06:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:54.265365308 +0000 UTC m=+104.061724991" watchObservedRunningTime="2026-01-21 06:36:54.265553703 +0000 UTC m=+104.061913376" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.324717 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/297f7c0e-6df1-49e0-821e-20cb040cba1e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.324760 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/297f7c0e-6df1-49e0-821e-20cb040cba1e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.324810 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/297f7c0e-6df1-49e0-821e-20cb040cba1e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.324842 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f7c0e-6df1-49e0-821e-20cb040cba1e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.324892 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/297f7c0e-6df1-49e0-821e-20cb040cba1e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.425860 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/297f7c0e-6df1-49e0-821e-20cb040cba1e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.425948 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/297f7c0e-6df1-49e0-821e-20cb040cba1e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" 
(UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.426019 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/297f7c0e-6df1-49e0-821e-20cb040cba1e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.426054 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f7c0e-6df1-49e0-821e-20cb040cba1e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.426119 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/297f7c0e-6df1-49e0-821e-20cb040cba1e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.426219 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/297f7c0e-6df1-49e0-821e-20cb040cba1e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.426238 4913 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/297f7c0e-6df1-49e0-821e-20cb040cba1e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.427405 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/297f7c0e-6df1-49e0-821e-20cb040cba1e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.445163 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/297f7c0e-6df1-49e0-821e-20cb040cba1e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.464468 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f7c0e-6df1-49e0-821e-20cb040cba1e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gdkqz\" (UID: \"297f7c0e-6df1-49e0-821e-20cb040cba1e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.518879 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:34:33.535404226 +0000 UTC Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.518936 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 
06:36:54.525525 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.525529 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:54 crc kubenswrapper[4913]: E0121 06:36:54.525803 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:54 crc kubenswrapper[4913]: E0121 06:36:54.525895 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.528972 4913 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 06:36:54 crc kubenswrapper[4913]: I0121 06:36:54.537555 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" Jan 21 06:36:54 crc kubenswrapper[4913]: W0121 06:36:54.560099 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod297f7c0e_6df1_49e0_821e_20cb040cba1e.slice/crio-1ef96ef3caada0668e1a07224fb82b838f5e0b10fe2481662fa4f72a9ef8b1c6 WatchSource:0}: Error finding container 1ef96ef3caada0668e1a07224fb82b838f5e0b10fe2481662fa4f72a9ef8b1c6: Status 404 returned error can't find the container with id 1ef96ef3caada0668e1a07224fb82b838f5e0b10fe2481662fa4f72a9ef8b1c6 Jan 21 06:36:55 crc kubenswrapper[4913]: I0121 06:36:55.111100 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" event={"ID":"297f7c0e-6df1-49e0-821e-20cb040cba1e","Type":"ContainerStarted","Data":"f6c5682ff2f3eedfd0ec6980dfa329f8983bdd65b6a9a0d4dc7edef54f1a2292"} Jan 21 06:36:55 crc kubenswrapper[4913]: I0121 06:36:55.111190 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" event={"ID":"297f7c0e-6df1-49e0-821e-20cb040cba1e","Type":"ContainerStarted","Data":"1ef96ef3caada0668e1a07224fb82b838f5e0b10fe2481662fa4f72a9ef8b1c6"} Jan 21 06:36:55 crc kubenswrapper[4913]: I0121 06:36:55.133278 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gdkqz" podStartSLOduration=86.133247818 podStartE2EDuration="1m26.133247818s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:36:55.132696394 +0000 UTC m=+104.929056137" watchObservedRunningTime="2026-01-21 06:36:55.133247818 +0000 UTC m=+104.929607501" Jan 21 06:36:55 crc kubenswrapper[4913]: I0121 06:36:55.526051 4913 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:55 crc kubenswrapper[4913]: I0121 06:36:55.526057 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:55 crc kubenswrapper[4913]: E0121 06:36:55.526244 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:55 crc kubenswrapper[4913]: E0121 06:36:55.526346 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:56 crc kubenswrapper[4913]: I0121 06:36:56.525964 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:56 crc kubenswrapper[4913]: I0121 06:36:56.526106 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:56 crc kubenswrapper[4913]: E0121 06:36:56.526251 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:56 crc kubenswrapper[4913]: E0121 06:36:56.526544 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:57 crc kubenswrapper[4913]: I0121 06:36:57.526296 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:57 crc kubenswrapper[4913]: E0121 06:36:57.526443 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:57 crc kubenswrapper[4913]: I0121 06:36:57.526306 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:57 crc kubenswrapper[4913]: E0121 06:36:57.526526 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:36:58 crc kubenswrapper[4913]: I0121 06:36:58.526078 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:36:58 crc kubenswrapper[4913]: I0121 06:36:58.526120 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:36:58 crc kubenswrapper[4913]: E0121 06:36:58.526253 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:36:58 crc kubenswrapper[4913]: E0121 06:36:58.526335 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:36:59 crc kubenswrapper[4913]: I0121 06:36:59.525651 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:36:59 crc kubenswrapper[4913]: I0121 06:36:59.525698 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:36:59 crc kubenswrapper[4913]: E0121 06:36:59.525834 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:36:59 crc kubenswrapper[4913]: E0121 06:36:59.526040 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:00 crc kubenswrapper[4913]: I0121 06:37:00.525864 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:00 crc kubenswrapper[4913]: I0121 06:37:00.525940 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:00 crc kubenswrapper[4913]: E0121 06:37:00.527091 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:00 crc kubenswrapper[4913]: E0121 06:37:00.527373 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:01 crc kubenswrapper[4913]: I0121 06:37:01.528945 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:37:01 crc kubenswrapper[4913]: E0121 06:37:01.529398 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-c7xtt_openshift-ovn-kubernetes(afe1e161-7227-48ff-824e-01d26e5c7218)\"" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" Jan 21 06:37:01 crc kubenswrapper[4913]: I0121 06:37:01.529761 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:01 crc kubenswrapper[4913]: E0121 06:37:01.529863 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:01 crc kubenswrapper[4913]: I0121 06:37:01.530060 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:01 crc kubenswrapper[4913]: E0121 06:37:01.530139 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.140547 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/1.log" Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.141504 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/0.log" Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.141706 4913 generic.go:334] "Generic (PLEG): container finished" podID="b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf" containerID="f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6" exitCode=1 Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.141760 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerDied","Data":"f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6"} Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.141937 4913 scope.go:117] "RemoveContainer" containerID="9203edfc020a750af1422fa7240b9bee7d1725388a16415e639173610873ccdd" Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.142431 4913 scope.go:117] "RemoveContainer" containerID="f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6" Jan 21 06:37:02 crc kubenswrapper[4913]: E0121 06:37:02.142807 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gn6lz_openshift-multus(b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf)\"" pod="openshift-multus/multus-gn6lz" podUID="b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf" Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.526189 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:02 crc kubenswrapper[4913]: E0121 06:37:02.526414 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:02 crc kubenswrapper[4913]: I0121 06:37:02.526816 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:02 crc kubenswrapper[4913]: E0121 06:37:02.527137 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:03 crc kubenswrapper[4913]: I0121 06:37:03.148104 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/1.log" Jan 21 06:37:03 crc kubenswrapper[4913]: I0121 06:37:03.525790 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:03 crc kubenswrapper[4913]: I0121 06:37:03.525797 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:03 crc kubenswrapper[4913]: E0121 06:37:03.525945 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:03 crc kubenswrapper[4913]: E0121 06:37:03.526061 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:04 crc kubenswrapper[4913]: I0121 06:37:04.526126 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:04 crc kubenswrapper[4913]: I0121 06:37:04.526180 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:04 crc kubenswrapper[4913]: E0121 06:37:04.526299 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:04 crc kubenswrapper[4913]: E0121 06:37:04.526768 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:05 crc kubenswrapper[4913]: I0121 06:37:05.526135 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:05 crc kubenswrapper[4913]: I0121 06:37:05.526156 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:05 crc kubenswrapper[4913]: E0121 06:37:05.527479 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:05 crc kubenswrapper[4913]: E0121 06:37:05.527630 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:06 crc kubenswrapper[4913]: I0121 06:37:06.525935 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:06 crc kubenswrapper[4913]: I0121 06:37:06.526003 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:06 crc kubenswrapper[4913]: E0121 06:37:06.526129 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:06 crc kubenswrapper[4913]: E0121 06:37:06.526252 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:07 crc kubenswrapper[4913]: I0121 06:37:07.525898 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:07 crc kubenswrapper[4913]: E0121 06:37:07.526090 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:07 crc kubenswrapper[4913]: I0121 06:37:07.525898 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:07 crc kubenswrapper[4913]: E0121 06:37:07.526427 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:08 crc kubenswrapper[4913]: I0121 06:37:08.526822 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:08 crc kubenswrapper[4913]: E0121 06:37:08.526998 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:08 crc kubenswrapper[4913]: I0121 06:37:08.527277 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:08 crc kubenswrapper[4913]: E0121 06:37:08.527378 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:09 crc kubenswrapper[4913]: I0121 06:37:09.525777 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:09 crc kubenswrapper[4913]: I0121 06:37:09.525837 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:09 crc kubenswrapper[4913]: E0121 06:37:09.525971 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:09 crc kubenswrapper[4913]: E0121 06:37:09.526237 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:10 crc kubenswrapper[4913]: E0121 06:37:10.510801 4913 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 06:37:10 crc kubenswrapper[4913]: I0121 06:37:10.525628 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:10 crc kubenswrapper[4913]: I0121 06:37:10.525740 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:10 crc kubenswrapper[4913]: E0121 06:37:10.528614 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:10 crc kubenswrapper[4913]: E0121 06:37:10.528581 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:10 crc kubenswrapper[4913]: E0121 06:37:10.631641 4913 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 21 06:37:11 crc kubenswrapper[4913]: I0121 06:37:11.526298 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:11 crc kubenswrapper[4913]: I0121 06:37:11.526394 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:11 crc kubenswrapper[4913]: E0121 06:37:11.526484 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:11 crc kubenswrapper[4913]: E0121 06:37:11.526649 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:12 crc kubenswrapper[4913]: I0121 06:37:12.525760 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:12 crc kubenswrapper[4913]: I0121 06:37:12.525776 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:12 crc kubenswrapper[4913]: E0121 06:37:12.526007 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:12 crc kubenswrapper[4913]: E0121 06:37:12.526504 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:12 crc kubenswrapper[4913]: I0121 06:37:12.526946 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.187151 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/3.log" Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.189496 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerStarted","Data":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.190159 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:37:13 crc kubenswrapper[4913]: 
I0121 06:37:13.228574 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podStartSLOduration=105.228554312 podStartE2EDuration="1m45.228554312s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:13.228300425 +0000 UTC m=+123.024660168" watchObservedRunningTime="2026-01-21 06:37:13.228554312 +0000 UTC m=+123.024913985" Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.520346 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wfcsc"] Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.520495 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:13 crc kubenswrapper[4913]: E0121 06:37:13.520656 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.526428 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:13 crc kubenswrapper[4913]: E0121 06:37:13.526683 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:13 crc kubenswrapper[4913]: I0121 06:37:13.527139 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:13 crc kubenswrapper[4913]: E0121 06:37:13.527274 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:14 crc kubenswrapper[4913]: I0121 06:37:14.526854 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:14 crc kubenswrapper[4913]: E0121 06:37:14.527016 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:15 crc kubenswrapper[4913]: I0121 06:37:15.526126 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:15 crc kubenswrapper[4913]: I0121 06:37:15.526221 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:15 crc kubenswrapper[4913]: I0121 06:37:15.526642 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:15 crc kubenswrapper[4913]: E0121 06:37:15.526675 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:15 crc kubenswrapper[4913]: E0121 06:37:15.526785 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:15 crc kubenswrapper[4913]: I0121 06:37:15.526833 4913 scope.go:117] "RemoveContainer" containerID="f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6" Jan 21 06:37:15 crc kubenswrapper[4913]: E0121 06:37:15.526932 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:15 crc kubenswrapper[4913]: E0121 06:37:15.633174 4913 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 06:37:16 crc kubenswrapper[4913]: I0121 06:37:16.209198 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/1.log" Jan 21 06:37:16 crc kubenswrapper[4913]: I0121 06:37:16.209252 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerStarted","Data":"35babac105f6583b573111491729f92109bcb54b7a16fc5739e17df46ec6cc70"} Jan 21 06:37:16 crc kubenswrapper[4913]: I0121 06:37:16.526506 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:16 crc kubenswrapper[4913]: E0121 06:37:16.526735 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:17 crc kubenswrapper[4913]: I0121 06:37:17.525974 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:17 crc kubenswrapper[4913]: I0121 06:37:17.526013 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:17 crc kubenswrapper[4913]: E0121 06:37:17.526182 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:17 crc kubenswrapper[4913]: I0121 06:37:17.526298 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:17 crc kubenswrapper[4913]: E0121 06:37:17.526435 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:17 crc kubenswrapper[4913]: E0121 06:37:17.526520 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:18 crc kubenswrapper[4913]: I0121 06:37:18.526386 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:18 crc kubenswrapper[4913]: E0121 06:37:18.526635 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:19 crc kubenswrapper[4913]: I0121 06:37:19.526429 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:19 crc kubenswrapper[4913]: I0121 06:37:19.526501 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:19 crc kubenswrapper[4913]: E0121 06:37:19.526720 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 06:37:19 crc kubenswrapper[4913]: E0121 06:37:19.527163 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wfcsc" podUID="60ed8982-ee20-4330-861f-61509c39bbe7" Jan 21 06:37:19 crc kubenswrapper[4913]: I0121 06:37:19.532663 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:19 crc kubenswrapper[4913]: E0121 06:37:19.532878 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 06:37:20 crc kubenswrapper[4913]: I0121 06:37:20.527020 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:20 crc kubenswrapper[4913]: E0121 06:37:20.528307 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.525721 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.525919 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.526153 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.529437 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.530805 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.530923 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 06:37:21 crc kubenswrapper[4913]: I0121 06:37:21.532508 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 06:37:22 crc kubenswrapper[4913]: I0121 06:37:22.525942 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:22 crc kubenswrapper[4913]: I0121 06:37:22.529148 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 06:37:22 crc kubenswrapper[4913]: I0121 06:37:22.529377 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.073533 4913 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.126652 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8kvjs"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.127358 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.130490 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.131629 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.134118 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.134126 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.134250 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.134568 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.135484 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bclp4"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.143903 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.166802 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.167429 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j966n"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.167872 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.168333 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.168820 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.169232 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.172691 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-p4428"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.173585 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.174337 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.174825 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177045 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527ef351-fb35-4f58-ae7b-d410c23496c6-serving-cert\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177099 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-config\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177126 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/19fc8173-94d9-419d-9031-b0664a3f01e4-serving-cert\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177159 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177183 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjlq\" (UniqueName: \"kubernetes.io/projected/527ef351-fb35-4f58-ae7b-d410c23496c6-kube-api-access-nhjlq\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177203 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-service-ca-bundle\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177247 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177267 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-client-ca\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177285 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gtft\" (UniqueName: \"kubernetes.io/projected/19fc8173-94d9-419d-9031-b0664a3f01e4-kube-api-access-7gtft\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.177306 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-config\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.179703 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6p62"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.180630 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.181617 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.182556 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.182723 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.182753 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.182980 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.183248 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.185165 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.185389 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.188781 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5fgwx"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.189351 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp"] Jan 21 06:37:25 crc 
kubenswrapper[4913]: I0121 06:37:25.194547 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.194750 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195201 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195349 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195548 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195687 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195861 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195918 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195579 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.195548 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196257 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"image-import-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196370 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196384 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196470 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196494 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196262 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196783 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.196908 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197156 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-k6jdd"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197259 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197317 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197471 4913 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197487 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197561 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197576 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197756 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197757 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197833 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197955 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.197973 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198048 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198155 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198246 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198257 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198617 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198773 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198811 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198923 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199135 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 
06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198439 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.198476 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199280 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199293 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199439 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199482 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199550 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199726 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.199894 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.200121 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.200381 4913 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.201654 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.201669 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.213366 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f95sb"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.214297 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.215752 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.216521 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.222930 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.226461 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.227286 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.229027 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-k855s"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.241051 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.241338 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.241475 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k855s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.242793 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.243391 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.242799 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.243759 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.243973 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6plkm"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.243174 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.244741 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.244074 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.244147 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.244518 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.250420 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.250461 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.253471 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.254879 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.266233 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qjrx8"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.266781 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.266814 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.267688 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.268218 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-78wqc"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.268297 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.268600 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.269688 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.270115 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.270223 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.271617 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274093 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274122 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274182 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274366 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274484 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274602 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274697 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274733 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274842 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274875 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274948 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275012 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275033 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274708 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.274842 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275183 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275273 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275373 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275466 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4"] Jan 
21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.275548 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.276085 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.276550 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.279862 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280091 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-etcd-serving-ca\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280127 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkl57\" (UniqueName: \"kubernetes.io/projected/8a371e85-6173-4802-976d-7ee68bc9afdc-kube-api-access-qkl57\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280151 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280177 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280191 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280217 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msv6f\" (UniqueName: \"kubernetes.io/projected/0ee14186-f787-47f1-8537-8cb2210ac28c-kube-api-access-msv6f\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280242 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8a371e85-6173-4802-976d-7ee68bc9afdc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280264 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp4zs\" (UniqueName: 
\"kubernetes.io/projected/3dc93a0c-f8e0-4c76-a032-6d3e34878168-kube-api-access-dp4zs\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280287 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6m5w\" (UniqueName: \"kubernetes.io/projected/6b1d8220-775c-47a7-a772-00eacc2f957c-kube-api-access-j6m5w\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280309 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280332 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280331 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280454 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d6jr\" (UniqueName: \"kubernetes.io/projected/bd2a9afe-21be-43e4-970d-03daff0713a1-kube-api-access-2d6jr\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280479 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzm42\" (UniqueName: \"kubernetes.io/projected/70da4912-d52e-41a4-bf05-91f3f377d243-kube-api-access-zzm42\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280501 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c207fbab-618a-4c01-8450-cb7ffad0f50d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280526 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a371e85-6173-4802-976d-7ee68bc9afdc-serving-cert\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280547 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280569 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-dir\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280615 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280652 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-encryption-config\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280673 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-console-config\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 
06:37:25.280697 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527ef351-fb35-4f58-ae7b-d410c23496c6-serving-cert\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280722 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjnbk\" (UniqueName: \"kubernetes.io/projected/208b512b-e1b8-4df9-9ec2-0f30bea24a20-kube-api-access-xjnbk\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280758 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-config\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280779 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-etcd-client\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280799 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-oauth-serving-cert\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " 
pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280820 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5v4\" (UniqueName: \"kubernetes.io/projected/c207fbab-618a-4c01-8450-cb7ffad0f50d-kube-api-access-bd5v4\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280840 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280863 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280873 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49vtr"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280900 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-trusted-ca-bundle\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " 
pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280922 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280942 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-ca\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280964 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w9zw\" (UniqueName: \"kubernetes.io/projected/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-kube-api-access-5w9zw\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.280985 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.289655 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 06:37:25 crc 
kubenswrapper[4913]: I0121 06:37:25.291947 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.292194 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.294023 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.294103 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.296915 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.297871 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.298226 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.298524 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.299465 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.281004 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82ebe95b-4e82-49aa-8693-52c0998ec7de-serving-cert\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.299839 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-etcd-client\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.299877 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.299914 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zwkm\" (UniqueName: \"kubernetes.io/projected/48edf52b-d54b-4116-95d0-f8051704a4e3-kube-api-access-8zwkm\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.299936 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4e8188-571a-4f41-8665-0565bf75f0d3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300021 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6j8z\" (UniqueName: \"kubernetes.io/projected/08ac51dd-419d-4632-8a49-1972be301121-kube-api-access-f6j8z\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300045 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-serving-cert\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300077 
4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x25cm\" (UniqueName: \"kubernetes.io/projected/57e1cc03-984e-4486-8393-f80bc1aa94af-kube-api-access-x25cm\") pod \"dns-operator-744455d44c-f95sb\" (UID: \"57e1cc03-984e-4486-8393-f80bc1aa94af\") " pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300097 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c207fbab-618a-4c01-8450-cb7ffad0f50d-config\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300114 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-serving-cert\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300130 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bmvf\" (UniqueName: \"kubernetes.io/projected/026a670d-684f-4eb6-bda0-bd60294d3b95-kube-api-access-8bmvf\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300149 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-config\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300168 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dc93a0c-f8e0-4c76-a032-6d3e34878168-serving-cert\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300191 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-policies\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300206 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-service-ca\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300224 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-client\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300241 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-client-ca\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300282 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-audit\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300306 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08ac51dd-419d-4632-8a49-1972be301121-console-oauth-config\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300326 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-auth-proxy-config\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300356 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-config\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300454 4913 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-client-ca\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300474 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gtft\" (UniqueName: \"kubernetes.io/projected/19fc8173-94d9-419d-9031-b0664a3f01e4-kube-api-access-7gtft\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300492 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-audit-policies\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300510 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-service-ca\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300529 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/208b512b-e1b8-4df9-9ec2-0f30bea24a20-trusted-ca\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " 
pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300548 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/465393d8-5293-482f-8f3b-91578b3ba57b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4bs7p\" (UID: \"465393d8-5293-482f-8f3b-91578b3ba57b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300565 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b1d8220-775c-47a7-a772-00eacc2f957c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300585 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b1d8220-775c-47a7-a772-00eacc2f957c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300639 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-config\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300661 4913 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300695 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c207fbab-618a-4c01-8450-cb7ffad0f50d-images\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300725 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300745 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4e8188-571a-4f41-8665-0565bf75f0d3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300760 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/48edf52b-d54b-4116-95d0-f8051704a4e3-metrics-tls\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300804 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300828 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-config\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300863 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smxdk\" (UniqueName: \"kubernetes.io/projected/465393d8-5293-482f-8f3b-91578b3ba57b-kube-api-access-smxdk\") pod \"cluster-samples-operator-665b6dd947-4bs7p\" (UID: \"465393d8-5293-482f-8f3b-91578b3ba57b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300906 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqr5t\" (UniqueName: \"kubernetes.io/projected/82ebe95b-4e82-49aa-8693-52c0998ec7de-kube-api-access-bqr5t\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300922 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/026a670d-684f-4eb6-bda0-bd60294d3b95-node-pullsecrets\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300945 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fc8173-94d9-419d-9031-b0664a3f01e4-serving-cert\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300966 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208b512b-e1b8-4df9-9ec2-0f30bea24a20-config\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.300985 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-machine-approver-tls\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301000 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-encryption-config\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301017 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4e8188-571a-4f41-8665-0565bf75f0d3-config\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301058 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301079 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70da4912-d52e-41a4-bf05-91f3f377d243-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301105 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/208b512b-e1b8-4df9-9ec2-0f30bea24a20-serving-cert\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 
06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301124 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj5sd\" (UniqueName: \"kubernetes.io/projected/c5567f5a-5084-4cc6-b654-f1190dcc0064-kube-api-access-cj5sd\") pod \"downloads-7954f5f757-k855s\" (UID: \"c5567f5a-5084-4cc6-b654-f1190dcc0064\") " pod="openshift-console/downloads-7954f5f757-k855s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.301822 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527ef351-fb35-4f58-ae7b-d410c23496c6-serving-cert\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.302408 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-config\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303318 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48edf52b-d54b-4116-95d0-f8051704a4e3-trusted-ca\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303407 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/026a670d-684f-4eb6-bda0-bd60294d3b95-audit-dir\") pod \"apiserver-76f77b778f-p4428\" (UID: 
\"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303454 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303511 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjlq\" (UniqueName: \"kubernetes.io/projected/527ef351-fb35-4f58-ae7b-d410c23496c6-kube-api-access-nhjlq\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303549 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70da4912-d52e-41a4-bf05-91f3f377d243-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303582 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48edf52b-d54b-4116-95d0-f8051704a4e3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303630 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ee14186-f787-47f1-8537-8cb2210ac28c-audit-dir\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303657 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08ac51dd-419d-4632-8a49-1972be301121-console-serving-cert\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303691 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.303904 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.305313 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57e1cc03-984e-4486-8393-f80bc1aa94af-metrics-tls\") pod \"dns-operator-744455d44c-f95sb\" (UID: \"57e1cc03-984e-4486-8393-f80bc1aa94af\") " pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.305361 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-trusted-ca-bundle\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.305390 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqstm\" (UniqueName: \"kubernetes.io/projected/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-kube-api-access-pqstm\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.305451 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.305480 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-config\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.305505 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-image-import-ca\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc 
kubenswrapper[4913]: I0121 06:37:25.305697 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.306451 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fc8173-94d9-419d-9031-b0664a3f01e4-service-ca-bundle\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.306489 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-client-ca\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.306934 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fc8173-94d9-419d-9031-b0664a3f01e4-serving-cert\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.307370 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-config\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.313852 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.316020 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.316256 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.317384 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.317626 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.321053 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.321511 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l6rtq"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.322829 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.323120 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.323288 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.323938 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.324094 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.329355 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-cxnpf"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.329829 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.330048 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.330697 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.331198 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.332653 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.333530 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.333824 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.334762 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kq7d8"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.334873 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.335336 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.335643 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8kvjs"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.337131 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.337687 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.338703 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.339573 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.340038 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j966n"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.341060 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bclp4"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.342052 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.342948 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.343923 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.345728 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bkrnj"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.346661 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.347972 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6p62"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.349283 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.349540 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-k6jdd"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.350822 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f95sb"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.354414 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.354459 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.358453 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.359257 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-p4428"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.361194 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qjrx8"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.362550 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 
06:37:25.367931 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5fgwx"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.369245 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l6rtq"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.369659 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.370630 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k855s"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.371780 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.373006 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49vtr"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.374348 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.375486 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kq7d8"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.376751 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.378046 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jlcqw"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.379299 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.379387 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kqctf"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.380423 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.381122 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.382618 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jlcqw"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.383721 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.384775 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6plkm"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.385752 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.386783 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.388569 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2"] Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.389277 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.389977 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f"]
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.391576 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-78wqc"]
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.392662 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s"]
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.396324 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8"]
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.399074 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl"]
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.400911 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bkrnj"]
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.402360 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"]
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.404400 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq"]
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.405534 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5gjk2"]
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406139 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5gjk2"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406779 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dc93a0c-f8e0-4c76-a032-6d3e34878168-serving-cert\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406789 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5gjk2"]
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406810 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-policies\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406829 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-config\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406843 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-service-ca\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406857 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-client\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-client-ca\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406886 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-audit\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406903 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08ac51dd-419d-4632-8a49-1972be301121-console-oauth-config\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406922 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-auth-proxy-config\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406943 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-audit-policies\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406957 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/208b512b-e1b8-4df9-9ec2-0f30bea24a20-trusted-ca\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406971 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/465393d8-5293-482f-8f3b-91578b3ba57b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4bs7p\" (UID: \"465393d8-5293-482f-8f3b-91578b3ba57b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.406987 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b1d8220-775c-47a7-a772-00eacc2f957c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407001 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b1d8220-775c-47a7-a772-00eacc2f957c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407035 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-config\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407050 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-service-ca\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407070 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407097 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-cabundle\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407127 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c207fbab-618a-4c01-8450-cb7ffad0f50d-images\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407143 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407160 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4e8188-571a-4f41-8665-0565bf75f0d3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407176 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-profile-collector-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407193 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407210 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smxdk\" (UniqueName: \"kubernetes.io/projected/465393d8-5293-482f-8f3b-91578b3ba57b-kube-api-access-smxdk\") pod \"cluster-samples-operator-665b6dd947-4bs7p\" (UID: \"465393d8-5293-482f-8f3b-91578b3ba57b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407234 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/48edf52b-d54b-4116-95d0-f8051704a4e3-metrics-tls\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407258 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqr5t\" (UniqueName: \"kubernetes.io/projected/82ebe95b-4e82-49aa-8693-52c0998ec7de-kube-api-access-bqr5t\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407280 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/026a670d-684f-4eb6-bda0-bd60294d3b95-node-pullsecrets\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407302 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208b512b-e1b8-4df9-9ec2-0f30bea24a20-config\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407323 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-machine-approver-tls\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407344 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-encryption-config\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407365 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4e8188-571a-4f41-8665-0565bf75f0d3-config\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407385 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70da4912-d52e-41a4-bf05-91f3f377d243-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407403 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/208b512b-e1b8-4df9-9ec2-0f30bea24a20-serving-cert\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407422 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj5sd\" (UniqueName: \"kubernetes.io/projected/c5567f5a-5084-4cc6-b654-f1190dcc0064-kube-api-access-cj5sd\") pod \"downloads-7954f5f757-k855s\" (UID: \"c5567f5a-5084-4cc6-b654-f1190dcc0064\") " pod="openshift-console/downloads-7954f5f757-k855s"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407443 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25gbh\" (UniqueName: \"kubernetes.io/projected/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-kube-api-access-25gbh\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407464 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48edf52b-d54b-4116-95d0-f8051704a4e3-trusted-ca\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407482 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95rdw\" (UniqueName: \"kubernetes.io/projected/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-kube-api-access-95rdw\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407502 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407527 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70da4912-d52e-41a4-bf05-91f3f377d243-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407545 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48edf52b-d54b-4116-95d0-f8051704a4e3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407564 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ee14186-f787-47f1-8537-8cb2210ac28c-audit-dir\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407625 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08ac51dd-419d-4632-8a49-1972be301121-console-serving-cert\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407646 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/026a670d-684f-4eb6-bda0-bd60294d3b95-audit-dir\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407665 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57e1cc03-984e-4486-8393-f80bc1aa94af-metrics-tls\") pod \"dns-operator-744455d44c-f95sb\" (UID: \"57e1cc03-984e-4486-8393-f80bc1aa94af\") " pod="openshift-dns-operator/dns-operator-744455d44c-f95sb"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407684 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407704 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-trusted-ca-bundle\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407729 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqstm\" (UniqueName: \"kubernetes.io/projected/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-kube-api-access-pqstm\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407753 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-key\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407898 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-config\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407921 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-image-import-ca\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407928 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-service-ca\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.407944 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6cdf7744-1629-46a4-b176-0fc75c149a95-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5gzt8\" (UID: \"6cdf7744-1629-46a4-b176-0fc75c149a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408066 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-etcd-serving-ca\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408124 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408158 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkl57\" (UniqueName: \"kubernetes.io/projected/8a371e85-6173-4802-976d-7ee68bc9afdc-kube-api-access-qkl57\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408175 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msv6f\" (UniqueName: \"kubernetes.io/projected/0ee14186-f787-47f1-8537-8cb2210ac28c-kube-api-access-msv6f\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408194 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-srv-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408212 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8a371e85-6173-4802-976d-7ee68bc9afdc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408228 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp4zs\" (UniqueName: \"kubernetes.io/projected/3dc93a0c-f8e0-4c76-a032-6d3e34878168-kube-api-access-dp4zs\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408246 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6m5w\" (UniqueName: \"kubernetes.io/projected/6b1d8220-775c-47a7-a772-00eacc2f957c-kube-api-access-j6m5w\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408264 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408282 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408299 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d6jr\" (UniqueName: \"kubernetes.io/projected/bd2a9afe-21be-43e4-970d-03daff0713a1-kube-api-access-2d6jr\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408315 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzm42\" (UniqueName: \"kubernetes.io/projected/70da4912-d52e-41a4-bf05-91f3f377d243-kube-api-access-zzm42\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408332 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c207fbab-618a-4c01-8450-cb7ffad0f50d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408351 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a371e85-6173-4802-976d-7ee68bc9afdc-serving-cert\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408366 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408383 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-dir\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408426 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-encryption-config\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408446 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-console-config\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408463 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408485 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjnbk\" (UniqueName: \"kubernetes.io/projected/208b512b-e1b8-4df9-9ec2-0f30bea24a20-kube-api-access-xjnbk\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408501 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-config\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408517 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-etcd-client\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408534 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-oauth-serving-cert\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408552 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd5v4\" (UniqueName: \"kubernetes.io/projected/c207fbab-618a-4c01-8450-cb7ffad0f50d-kube-api-access-bd5v4\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408561 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-policies\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408568 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408633 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408664 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408706 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvbjg\" (UniqueName: \"kubernetes.io/projected/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-kube-api-access-nvbjg\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408737 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408759 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-ca\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408781 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-trusted-ca-bundle\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408803 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvsvx\" (UniqueName: \"kubernetes.io/projected/6cdf7744-1629-46a4-b176-0fc75c149a95-kube-api-access-qvsvx\") pod \"control-plane-machine-set-operator-78cbb6b69f-5gzt8\" (UID: \"6cdf7744-1629-46a4-b176-0fc75c149a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408827 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w9zw\" (UniqueName: \"kubernetes.io/projected/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-kube-api-access-5w9zw\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408852 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.408871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82ebe95b-4e82-49aa-8693-52c0998ec7de-serving-cert\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409001 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-config\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409053 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-etcd-client\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409082 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2"
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409107 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zwkm\" (UniqueName:
\"kubernetes.io/projected/48edf52b-d54b-4116-95d0-f8051704a4e3-kube-api-access-8zwkm\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409129 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4e8188-571a-4f41-8665-0565bf75f0d3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409146 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6j8z\" (UniqueName: \"kubernetes.io/projected/08ac51dd-419d-4632-8a49-1972be301121-kube-api-access-f6j8z\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409163 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-serving-cert\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409182 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x25cm\" (UniqueName: \"kubernetes.io/projected/57e1cc03-984e-4486-8393-f80bc1aa94af-kube-api-access-x25cm\") pod \"dns-operator-744455d44c-f95sb\" (UID: \"57e1cc03-984e-4486-8393-f80bc1aa94af\") " pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409221 4913 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c207fbab-618a-4c01-8450-cb7ffad0f50d-config\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409237 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-serving-cert\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409253 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bmvf\" (UniqueName: \"kubernetes.io/projected/026a670d-684f-4eb6-bda0-bd60294d3b95-kube-api-access-8bmvf\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409447 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.409514 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/208b512b-e1b8-4df9-9ec2-0f30bea24a20-config\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc 
kubenswrapper[4913]: I0121 06:37:25.409579 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-etcd-serving-ca\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.410089 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-client-ca\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.410857 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-client\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.411420 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.411565 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-audit\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 
21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.411912 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.412148 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-dir\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.412503 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-etcd-ca\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.412633 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.412897 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-trusted-ca-bundle\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.413129 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.413405 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.414001 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c207fbab-618a-4c01-8450-cb7ffad0f50d-config\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.414045 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c207fbab-618a-4c01-8450-cb7ffad0f50d-images\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.414374 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.414667 4913 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c207fbab-618a-4c01-8450-cb7ffad0f50d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.414727 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8a371e85-6173-4802-976d-7ee68bc9afdc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.414957 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-auth-proxy-config\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.415228 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.415556 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-encryption-config\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc 
kubenswrapper[4913]: I0121 06:37:25.415582 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.415741 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b1d8220-775c-47a7-a772-00eacc2f957c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.416018 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.416088 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-console-config\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.416340 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-serving-cert\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " 
pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.416447 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ee14186-f787-47f1-8537-8cb2210ac28c-audit-policies\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.416681 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-service-ca\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417011 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/208b512b-e1b8-4df9-9ec2-0f30bea24a20-trusted-ca\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417170 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-oauth-serving-cert\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417263 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-config\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417375 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/026a670d-684f-4eb6-bda0-bd60294d3b95-node-pullsecrets\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417403 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ee14186-f787-47f1-8537-8cb2210ac28c-audit-dir\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417485 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b1d8220-775c-47a7-a772-00eacc2f957c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417579 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dc93a0c-f8e0-4c76-a032-6d3e34878168-config\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.417950 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4e8188-571a-4f41-8665-0565bf75f0d3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: 
\"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.418019 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a4e8188-571a-4f41-8665-0565bf75f0d3-config\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.418020 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-config\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.418363 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/026a670d-684f-4eb6-bda0-bd60294d3b95-audit-dir\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.418787 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70da4912-d52e-41a4-bf05-91f3f377d243-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.419072 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/08ac51dd-419d-4632-8a49-1972be301121-trusted-ca-bundle\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.419416 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/026a670d-684f-4eb6-bda0-bd60294d3b95-image-import-ca\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.419494 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-encryption-config\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.419496 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82ebe95b-4e82-49aa-8693-52c0998ec7de-serving-cert\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.419608 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-machine-approver-tls\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.420169 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/465393d8-5293-482f-8f3b-91578b3ba57b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4bs7p\" (UID: \"465393d8-5293-482f-8f3b-91578b3ba57b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.420900 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.421186 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-etcd-client\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.421408 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08ac51dd-419d-4632-8a49-1972be301121-console-oauth-config\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.421628 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/026a670d-684f-4eb6-bda0-bd60294d3b95-etcd-client\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.423329 4913 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70da4912-d52e-41a4-bf05-91f3f377d243-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.423365 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57e1cc03-984e-4486-8393-f80bc1aa94af-metrics-tls\") pod \"dns-operator-744455d44c-f95sb\" (UID: \"57e1cc03-984e-4486-8393-f80bc1aa94af\") " pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.423381 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.423543 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08ac51dd-419d-4632-8a49-1972be301121-console-serving-cert\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.424186 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" 
Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.424958 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ee14186-f787-47f1-8537-8cb2210ac28c-serving-cert\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.425837 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/208b512b-e1b8-4df9-9ec2-0f30bea24a20-serving-cert\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.429302 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.430417 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.440121 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dc93a0c-f8e0-4c76-a032-6d3e34878168-serving-cert\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.452486 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 06:37:25 crc 
kubenswrapper[4913]: I0121 06:37:25.474088 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.476907 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.489296 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.500823 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.509426 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.509918 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.509961 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-qvsvx\" (UniqueName: \"kubernetes.io/projected/6cdf7744-1629-46a4-b176-0fc75c149a95-kube-api-access-qvsvx\") pod \"control-plane-machine-set-operator-78cbb6b69f-5gzt8\" (UID: \"6cdf7744-1629-46a4-b176-0fc75c149a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510036 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-cabundle\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510077 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-profile-collector-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510140 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95rdw\" (UniqueName: \"kubernetes.io/projected/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-kube-api-access-95rdw\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510163 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25gbh\" (UniqueName: \"kubernetes.io/projected/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-kube-api-access-25gbh\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510204 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-key\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510238 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6cdf7744-1629-46a4-b176-0fc75c149a95-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5gzt8\" (UID: \"6cdf7744-1629-46a4-b176-0fc75c149a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510280 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-srv-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510383 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.510429 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-nvbjg\" (UniqueName: \"kubernetes.io/projected/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-kube-api-access-nvbjg\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.529355 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.549078 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.569890 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.589549 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.599559 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/48edf52b-d54b-4116-95d0-f8051704a4e3-metrics-tls\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.616253 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.619558 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48edf52b-d54b-4116-95d0-f8051704a4e3-trusted-ca\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.629170 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.649859 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.656214 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a371e85-6173-4802-976d-7ee68bc9afdc-serving-cert\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.670228 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.690066 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.710168 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.729452 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.749451 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.754636 4913 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.778457 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.790195 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.810416 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.821734 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.829454 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.850549 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.870371 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.890810 4913 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.910298 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.950454 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.970462 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 06:37:25 crc kubenswrapper[4913]: I0121 06:37:25.990427 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.009657 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.030940 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.051383 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.070203 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.090886 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.110157 4913 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.160461 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.161051 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.169894 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.190168 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.234650 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gtft\" (UniqueName: \"kubernetes.io/projected/19fc8173-94d9-419d-9031-b0664a3f01e4-kube-api-access-7gtft\") pod \"authentication-operator-69f744f599-8kvjs\" (UID: \"19fc8173-94d9-419d-9031-b0664a3f01e4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.250166 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.254873 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjlq\" (UniqueName: \"kubernetes.io/projected/527ef351-fb35-4f58-ae7b-d410c23496c6-kube-api-access-nhjlq\") pod \"controller-manager-879f6c89f-bclp4\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 
06:37:26.270373 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.290544 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.310581 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.328885 4913 request.go:700] Waited for 1.011016508s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0 Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.331381 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.346676 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6cdf7744-1629-46a4-b176-0fc75c149a95-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5gzt8\" (UID: \"6cdf7744-1629-46a4-b176-0fc75c149a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.349894 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.366806 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.370419 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.390385 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.393950 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.410686 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.431392 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.450274 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.469921 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.493201 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.510838 4913 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 
06:37:26.511006 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-srv-cert podName:fdb0c051-dafc-4d42-8c28-d28c049eb0f7 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:27.010961174 +0000 UTC m=+136.807320897 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-srv-cert") pod "catalog-operator-68c6474976-cjqvz" (UID: "fdb0c051-dafc-4d42-8c28-d28c049eb0f7") : failed to sync secret cache: timed out waiting for the condition Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.511977 4913 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.512098 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-key podName:56b4a4e7-bb42-437e-8dce-70cbc917c7a8 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:27.012075294 +0000 UTC m=+136.808435007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-key") pod "service-ca-9c57cc56f-kq7d8" (UID: "56b4a4e7-bb42-437e-8dce-70cbc917c7a8") : failed to sync secret cache: timed out waiting for the condition Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.512096 4913 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.512268 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-profile-collector-cert podName:fdb0c051-dafc-4d42-8c28-d28c049eb0f7 nodeName:}" failed. 
No retries permitted until 2026-01-21 06:37:27.012240178 +0000 UTC m=+136.808599891 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-profile-collector-cert") pod "catalog-operator-68c6474976-cjqvz" (UID: "fdb0c051-dafc-4d42-8c28-d28c049eb0f7") : failed to sync secret cache: timed out waiting for the condition Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.512122 4913 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 21 06:37:26 crc kubenswrapper[4913]: E0121 06:37:26.512377 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-cabundle podName:56b4a4e7-bb42-437e-8dce-70cbc917c7a8 nodeName:}" failed. No retries permitted until 2026-01-21 06:37:27.012362461 +0000 UTC m=+136.808722174 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-cabundle") pod "service-ca-9c57cc56f-kq7d8" (UID: "56b4a4e7-bb42-437e-8dce-70cbc917c7a8") : failed to sync configmap cache: timed out waiting for the condition Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.512528 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.530201 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.549784 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.576015 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.589647 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.610116 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.629689 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.636468 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8kvjs"] Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.649666 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.659149 4913 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bclp4"] Jan 21 06:37:26 crc kubenswrapper[4913]: W0121 06:37:26.668657 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod527ef351_fb35_4f58_ae7b_d410c23496c6.slice/crio-67d6415346a44324c1ea19c76e9d0bfa267f53b8f3aa0e917d549d642e200abc WatchSource:0}: Error finding container 67d6415346a44324c1ea19c76e9d0bfa267f53b8f3aa0e917d549d642e200abc: Status 404 returned error can't find the container with id 67d6415346a44324c1ea19c76e9d0bfa267f53b8f3aa0e917d549d642e200abc Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.669653 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.690239 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.709578 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.729664 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.749865 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.769728 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.789829 4913 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.810729 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.830699 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.851562 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.870469 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.889764 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.910351 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.929648 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.952431 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.969892 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 06:37:26 crc kubenswrapper[4913]: I0121 06:37:26.989781 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.010042 4913 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.032253 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-key\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.032407 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-srv-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.032730 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-cabundle\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.032790 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-profile-collector-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.033898 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-cabundle\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: 
\"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.039757 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-srv-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.039905 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-profile-collector-cert\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.040151 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-signing-key\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.051137 4913 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.070259 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.090283 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.111323 4913 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.131015 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.151764 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.170827 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.190187 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.210299 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.230126 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.264490 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp4zs\" (UniqueName: \"kubernetes.io/projected/3dc93a0c-f8e0-4c76-a032-6d3e34878168-kube-api-access-dp4zs\") pod \"etcd-operator-b45778765-6plkm\" (UID: \"3dc93a0c-f8e0-4c76-a032-6d3e34878168\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.268032 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" event={"ID":"527ef351-fb35-4f58-ae7b-d410c23496c6","Type":"ContainerStarted","Data":"e0fb25a613adfea6f7ac86ba60e0bf6f84329c9ddc4e60d2b7f94bc10001b0f5"} Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 
06:37:27.268185 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" event={"ID":"527ef351-fb35-4f58-ae7b-d410c23496c6","Type":"ContainerStarted","Data":"67d6415346a44324c1ea19c76e9d0bfa267f53b8f3aa0e917d549d642e200abc"} Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.268915 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.270276 4913 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bclp4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.270322 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" podUID="527ef351-fb35-4f58-ae7b-d410c23496c6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.270508 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" event={"ID":"19fc8173-94d9-419d-9031-b0664a3f01e4","Type":"ContainerStarted","Data":"979142fe7086d20d808b547cc993bc6a74e3ad1c7b59d0514973ebca8333c021"} Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.270615 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" event={"ID":"19fc8173-94d9-419d-9031-b0664a3f01e4","Type":"ContainerStarted","Data":"1ce9b8e54eddc8eb47fd9f20dd34ebd104adde7eba0ddb229943dc0f28101a43"} Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 
06:37:27.292493 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzm42\" (UniqueName: \"kubernetes.io/projected/70da4912-d52e-41a4-bf05-91f3f377d243-kube-api-access-zzm42\") pod \"openshift-apiserver-operator-796bbdcf4f-gd6rp\" (UID: \"70da4912-d52e-41a4-bf05-91f3f377d243\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.303875 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6m5w\" (UniqueName: \"kubernetes.io/projected/6b1d8220-775c-47a7-a772-00eacc2f957c-kube-api-access-j6m5w\") pod \"openshift-controller-manager-operator-756b6f6bc6-swrpx\" (UID: \"6b1d8220-775c-47a7-a772-00eacc2f957c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.327316 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bmvf\" (UniqueName: \"kubernetes.io/projected/026a670d-684f-4eb6-bda0-bd60294d3b95-kube-api-access-8bmvf\") pod \"apiserver-76f77b778f-p4428\" (UID: \"026a670d-684f-4eb6-bda0-bd60294d3b95\") " pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.345084 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msv6f\" (UniqueName: \"kubernetes.io/projected/0ee14186-f787-47f1-8537-8cb2210ac28c-kube-api-access-msv6f\") pod \"apiserver-7bbb656c7d-b8vxc\" (UID: \"0ee14186-f787-47f1-8537-8cb2210ac28c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.348859 4913 request.go:700] Waited for 1.937505382s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token Jan 21 
06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.370061 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkl57\" (UniqueName: \"kubernetes.io/projected/8a371e85-6173-4802-976d-7ee68bc9afdc-kube-api-access-qkl57\") pod \"openshift-config-operator-7777fb866f-jn7zt\" (UID: \"8a371e85-6173-4802-976d-7ee68bc9afdc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.381095 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d6jr\" (UniqueName: \"kubernetes.io/projected/bd2a9afe-21be-43e4-970d-03daff0713a1-kube-api-access-2d6jr\") pod \"oauth-openshift-558db77b4-b6p62\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.407422 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjnbk\" (UniqueName: \"kubernetes.io/projected/208b512b-e1b8-4df9-9ec2-0f30bea24a20-kube-api-access-xjnbk\") pod \"console-operator-58897d9998-j966n\" (UID: \"208b512b-e1b8-4df9-9ec2-0f30bea24a20\") " pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.426118 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zwkm\" (UniqueName: \"kubernetes.io/projected/48edf52b-d54b-4116-95d0-f8051704a4e3-kube-api-access-8zwkm\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.435132 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.443378 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.454072 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4e8188-571a-4f41-8665-0565bf75f0d3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kgg79\" (UID: \"6a4e8188-571a-4f41-8665-0565bf75f0d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.464876 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.465288 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6j8z\" (UniqueName: \"kubernetes.io/projected/08ac51dd-419d-4632-8a49-1972be301121-kube-api-access-f6j8z\") pod \"console-f9d7485db-k6jdd\" (UID: \"08ac51dd-419d-4632-8a49-1972be301121\") " pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.483027 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.487779 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.489691 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w9zw\" (UniqueName: \"kubernetes.io/projected/6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01-kube-api-access-5w9zw\") pod \"machine-approver-56656f9798-w5pm5\" (UID: \"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.509847 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x25cm\" (UniqueName: \"kubernetes.io/projected/57e1cc03-984e-4486-8393-f80bc1aa94af-kube-api-access-x25cm\") pod \"dns-operator-744455d44c-f95sb\" (UID: \"57e1cc03-984e-4486-8393-f80bc1aa94af\") " pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.524732 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smxdk\" (UniqueName: \"kubernetes.io/projected/465393d8-5293-482f-8f3b-91578b3ba57b-kube-api-access-smxdk\") pod \"cluster-samples-operator-665b6dd947-4bs7p\" (UID: \"465393d8-5293-482f-8f3b-91578b3ba57b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.549017 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqr5t\" (UniqueName: \"kubernetes.io/projected/82ebe95b-4e82-49aa-8693-52c0998ec7de-kube-api-access-bqr5t\") pod \"route-controller-manager-6576b87f9c-tbgjj\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.556843 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.563383 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.568139 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48edf52b-d54b-4116-95d0-f8051704a4e3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-5p75s\" (UID: \"48edf52b-d54b-4116-95d0-f8051704a4e3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.593816 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.594647 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.601395 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.601917 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd5v4\" (UniqueName: \"kubernetes.io/projected/c207fbab-618a-4c01-8450-cb7ffad0f50d-kube-api-access-bd5v4\") pod \"machine-api-operator-5694c8668f-5fgwx\" (UID: \"c207fbab-618a-4c01-8450-cb7ffad0f50d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.612360 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqstm\" (UniqueName: \"kubernetes.io/projected/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-kube-api-access-pqstm\") pod \"marketplace-operator-79b997595-qjrx8\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.623300 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.633794 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj5sd\" (UniqueName: \"kubernetes.io/projected/c5567f5a-5084-4cc6-b654-f1190dcc0064-kube-api-access-cj5sd\") pod \"downloads-7954f5f757-k855s\" (UID: \"c5567f5a-5084-4cc6-b654-f1190dcc0064\") " pod="openshift-console/downloads-7954f5f757-k855s" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.659992 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvsvx\" (UniqueName: \"kubernetes.io/projected/6cdf7744-1629-46a4-b176-0fc75c149a95-kube-api-access-qvsvx\") pod \"control-plane-machine-set-operator-78cbb6b69f-5gzt8\" (UID: \"6cdf7744-1629-46a4-b176-0fc75c149a95\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.670347 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95rdw\" (UniqueName: \"kubernetes.io/projected/fdb0c051-dafc-4d42-8c28-d28c049eb0f7-kube-api-access-95rdw\") pod \"catalog-operator-68c6474976-cjqvz\" (UID: \"fdb0c051-dafc-4d42-8c28-d28c049eb0f7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.674114 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.694843 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25gbh\" (UniqueName: \"kubernetes.io/projected/d6e5f1ef-7cb7-4909-beaf-cd352767d0ca-kube-api-access-25gbh\") pod \"kube-storage-version-migrator-operator-b67b599dd-5flq2\" (UID: \"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.705148 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvbjg\" (UniqueName: \"kubernetes.io/projected/56b4a4e7-bb42-437e-8dce-70cbc917c7a8-kube-api-access-nvbjg\") pod \"service-ca-9c57cc56f-kq7d8\" (UID: \"56b4a4e7-bb42-437e-8dce-70cbc917c7a8\") " pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.706082 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.729029 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741400 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-proxy-tls\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741452 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2jwg\" (UniqueName: \"kubernetes.io/projected/f6ca48b3-019f-4481-b136-7d392b7073d8-kube-api-access-b2jwg\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741487 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6ca48b3-019f-4481-b136-7d392b7073d8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741510 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/238fcbbb-ece2-4108-b4be-79ed872e541d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 
06:37:27.741576 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c174a67-522b-4d34-ba66-905ff560f206-metrics-tls\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741620 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741655 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e0ca241-c740-42a3-8fd9-970024126d64-service-ca-bundle\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741701 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741717 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-bound-sa-token\") pod \"image-registry-697d97f7c8-78wqc\" 
(UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741743 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7775c5-ca46-4ab1-b4e1-96c818301059-config-volume\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741791 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9gbf\" (UniqueName: \"kubernetes.io/projected/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-kube-api-access-r9gbf\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741806 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-config\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741844 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwm6w\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-kube-api-access-kwm6w\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741891 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-metrics-certs\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741930 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xr5\" (UniqueName: \"kubernetes.io/projected/6c174a67-522b-4d34-ba66-905ff560f206-kube-api-access-q6xr5\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741955 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-tmpfs\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.741969 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd4rt\" (UniqueName: \"kubernetes.io/projected/3f92f014-e88f-4e07-8f20-892e47c5de80-kube-api-access-kd4rt\") pod \"package-server-manager-789f6589d5-5zk5l\" (UID: \"3f92f014-e88f-4e07-8f20-892e47c5de80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742005 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-tls\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742019 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q94j\" (UniqueName: \"kubernetes.io/projected/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-kube-api-access-6q94j\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742034 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc94523-e315-4913-8ea8-ffa72274f5ab-serving-cert\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742056 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a7775c5-ca46-4ab1-b4e1-96c818301059-secret-volume\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742115 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-srv-cert\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742172 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/238fcbbb-ece2-4108-b4be-79ed872e541d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742186 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742211 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e70bbe19-3e5b-4629-b9bf-3c6fc8072836-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l6rtq\" (UID: \"e70bbe19-3e5b-4629-b9bf-3c6fc8072836\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742243 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c174a67-522b-4d34-ba66-905ff560f206-config-volume\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742276 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-trusted-ca\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" 
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742315 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742330 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sh9l\" (UniqueName: \"kubernetes.io/projected/238fcbbb-ece2-4108-b4be-79ed872e541d-kube-api-access-7sh9l\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742382 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-certificates\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742398 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsnmv\" (UniqueName: \"kubernetes.io/projected/9ef3dfdf-4ae9-4baa-a830-e50b4942dd32-kube-api-access-gsnmv\") pod \"migrator-59844c95c7-5l5kl\" (UID: \"9ef3dfdf-4ae9-4baa-a830-e50b4942dd32\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742434 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fxmd4\" (UniqueName: \"kubernetes.io/projected/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-kube-api-access-fxmd4\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742451 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/238fcbbb-ece2-4108-b4be-79ed872e541d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742468 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742484 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f92f014-e88f-4e07-8f20-892e47c5de80-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5zk5l\" (UID: \"3f92f014-e88f-4e07-8f20-892e47c5de80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742526 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b89ft\" (UniqueName: 
\"kubernetes.io/projected/e70bbe19-3e5b-4629-b9bf-3c6fc8072836-kube-api-access-b89ft\") pod \"multus-admission-controller-857f4d67dd-l6rtq\" (UID: \"e70bbe19-3e5b-4629-b9bf-3c6fc8072836\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742540 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742623 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97qxg\" (UniqueName: \"kubernetes.io/projected/0a7775c5-ca46-4ab1-b4e1-96c818301059-kube-api-access-97qxg\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742648 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-images\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742671 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") 
" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742687 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzcpn\" (UniqueName: \"kubernetes.io/projected/2dc94523-e315-4913-8ea8-ffa72274f5ab-kube-api-access-tzcpn\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742727 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742777 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-apiservice-cert\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742844 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f46fd64f-46cb-4464-8f26-6df55bf77ba1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742877 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-stats-auth\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.742960 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f46fd64f-46cb-4464-8f26-6df55bf77ba1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.743001 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77ct2\" (UniqueName: \"kubernetes.io/projected/3e0ca241-c740-42a3-8fd9-970024126d64-kube-api-access-77ct2\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.743033 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6ca48b3-019f-4481-b136-7d392b7073d8-proxy-tls\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.744399 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-default-certificate\") pod \"router-default-5444994796-cxnpf\" (UID: 
\"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.744434 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc94523-e315-4913-8ea8-ffa72274f5ab-config\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.744478 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-webhook-cert\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: E0121 06:37:27.749346 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.249331111 +0000 UTC m=+138.045690784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.750775 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.774181 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.797071 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.806964 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k855s" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849099 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849305 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tpw\" (UniqueName: \"kubernetes.io/projected/08d20980-2196-4efa-952e-defded465fb4-kube-api-access-c2tpw\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849339 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6ca48b3-019f-4481-b136-7d392b7073d8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 
06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849366 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/238fcbbb-ece2-4108-b4be-79ed872e541d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849385 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43177e9e-03e6-4864-843a-c753a096648f-cert\") pod \"ingress-canary-5gjk2\" (UID: \"43177e9e-03e6-4864-843a-c753a096648f\") " pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849411 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c174a67-522b-4d34-ba66-905ff560f206-metrics-tls\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849430 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849447 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e0ca241-c740-42a3-8fd9-970024126d64-service-ca-bundle\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " 
pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849466 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rxz9\" (UniqueName: \"kubernetes.io/projected/3243dc09-2f27-4905-a1cc-08ff6d1e270f-kube-api-access-2rxz9\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849483 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849498 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-bound-sa-token\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849512 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7775c5-ca46-4ab1-b4e1-96c818301059-config-volume\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849527 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-csi-data-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849544 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9gbf\" (UniqueName: \"kubernetes.io/projected/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-kube-api-access-r9gbf\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849560 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-config\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849609 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwm6w\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-kube-api-access-kwm6w\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849629 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-metrics-certs\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849653 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q6xr5\" (UniqueName: \"kubernetes.io/projected/6c174a67-522b-4d34-ba66-905ff560f206-kube-api-access-q6xr5\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849680 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-tmpfs\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849699 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd4rt\" (UniqueName: \"kubernetes.io/projected/3f92f014-e88f-4e07-8f20-892e47c5de80-kube-api-access-kd4rt\") pod \"package-server-manager-789f6589d5-5zk5l\" (UID: \"3f92f014-e88f-4e07-8f20-892e47c5de80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849716 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-tls\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849732 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q94j\" (UniqueName: \"kubernetes.io/projected/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-kube-api-access-6q94j\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" 
Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849747 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc94523-e315-4913-8ea8-ffa72274f5ab-serving-cert\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849762 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a7775c5-ca46-4ab1-b4e1-96c818301059-secret-volume\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849786 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-srv-cert\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849811 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/238fcbbb-ece2-4108-b4be-79ed872e541d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849825 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: 
\"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849841 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e70bbe19-3e5b-4629-b9bf-3c6fc8072836-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l6rtq\" (UID: \"e70bbe19-3e5b-4629-b9bf-3c6fc8072836\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849859 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c174a67-522b-4d34-ba66-905ff560f206-config-volume\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849874 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-trusted-ca\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849888 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sh9l\" (UniqueName: \"kubernetes.io/projected/238fcbbb-ece2-4108-b4be-79ed872e541d-kube-api-access-7sh9l\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849903 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-mountpoint-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849918 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849941 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-socket-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849961 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-certificates\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.849983 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsnmv\" (UniqueName: \"kubernetes.io/projected/9ef3dfdf-4ae9-4baa-a830-e50b4942dd32-kube-api-access-gsnmv\") pod \"migrator-59844c95c7-5l5kl\" (UID: \"9ef3dfdf-4ae9-4baa-a830-e50b4942dd32\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850005 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fxmd4\" (UniqueName: \"kubernetes.io/projected/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-kube-api-access-fxmd4\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850019 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/238fcbbb-ece2-4108-b4be-79ed872e541d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850036 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850052 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f92f014-e88f-4e07-8f20-892e47c5de80-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5zk5l\" (UID: \"3f92f014-e88f-4e07-8f20-892e47c5de80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850069 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b89ft\" (UniqueName: 
\"kubernetes.io/projected/e70bbe19-3e5b-4629-b9bf-3c6fc8072836-kube-api-access-b89ft\") pod \"multus-admission-controller-857f4d67dd-l6rtq\" (UID: \"e70bbe19-3e5b-4629-b9bf-3c6fc8072836\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850087 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850109 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/08d20980-2196-4efa-952e-defded465fb4-certs\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850140 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97qxg\" (UniqueName: \"kubernetes.io/projected/0a7775c5-ca46-4ab1-b4e1-96c818301059-kube-api-access-97qxg\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850162 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/08d20980-2196-4efa-952e-defded465fb4-node-bootstrap-token\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " 
pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850182 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-images\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850199 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850214 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzcpn\" (UniqueName: \"kubernetes.io/projected/2dc94523-e315-4913-8ea8-ffa72274f5ab-kube-api-access-tzcpn\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850231 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-registration-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850255 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-apiservice-cert\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850283 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f46fd64f-46cb-4464-8f26-6df55bf77ba1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850302 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-stats-auth\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850321 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv67c\" (UniqueName: \"kubernetes.io/projected/43177e9e-03e6-4864-843a-c753a096648f-kube-api-access-cv67c\") pod \"ingress-canary-5gjk2\" (UID: \"43177e9e-03e6-4864-843a-c753a096648f\") " pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850346 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f46fd64f-46cb-4464-8f26-6df55bf77ba1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850381 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-77ct2\" (UniqueName: \"kubernetes.io/projected/3e0ca241-c740-42a3-8fd9-970024126d64-kube-api-access-77ct2\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850399 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6ca48b3-019f-4481-b136-7d392b7073d8-proxy-tls\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850424 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-default-certificate\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850438 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc94523-e315-4913-8ea8-ffa72274f5ab-config\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850452 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-webhook-cert\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 
crc kubenswrapper[4913]: I0121 06:37:27.850467 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-proxy-tls\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850482 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-plugins-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.850497 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2jwg\" (UniqueName: \"kubernetes.io/projected/f6ca48b3-019f-4481-b136-7d392b7073d8-kube-api-access-b2jwg\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 06:37:27 crc kubenswrapper[4913]: E0121 06:37:27.850852 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.350837619 +0000 UTC m=+138.147197292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.860474 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6ca48b3-019f-4481-b136-7d392b7073d8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.861941 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-certificates\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.865417 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/238fcbbb-ece2-4108-b4be-79ed872e541d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.866017 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-tmpfs\") pod 
\"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.870767 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7775c5-ca46-4ab1-b4e1-96c818301059-config-volume\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.871432 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-config\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.875857 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.883813 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.884367 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e0ca241-c740-42a3-8fd9-970024126d64-service-ca-bundle\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.889249 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.889713 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-images\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.893570 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/6c174a67-522b-4d34-ba66-905ff560f206-config-volume\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.894037 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f46fd64f-46cb-4464-8f26-6df55bf77ba1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.894274 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.895248 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc94523-e315-4913-8ea8-ffa72274f5ab-config\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.897494 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-trusted-ca\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.898417 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-apiservice-cert\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.906800 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/238fcbbb-ece2-4108-b4be-79ed872e541d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.909506 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c174a67-522b-4d34-ba66-905ff560f206-metrics-tls\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.909990 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.910120 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e70bbe19-3e5b-4629-b9bf-3c6fc8072836-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l6rtq\" (UID: \"e70bbe19-3e5b-4629-b9bf-3c6fc8072836\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.910531 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.912356 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-metrics-certs\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.912760 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.914374 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2jwg\" (UniqueName: \"kubernetes.io/projected/f6ca48b3-019f-4481-b136-7d392b7073d8-kube-api-access-b2jwg\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.915308 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a7775c5-ca46-4ab1-b4e1-96c818301059-secret-volume\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.916331 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc94523-e315-4913-8ea8-ffa72274f5ab-serving-cert\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.916637 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-tls\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.917066 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-stats-auth\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.919371 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f46fd64f-46cb-4464-8f26-6df55bf77ba1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.919910 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f92f014-e88f-4e07-8f20-892e47c5de80-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5zk5l\" (UID: \"3f92f014-e88f-4e07-8f20-892e47c5de80\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.920561 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-p4428"] Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.922563 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3e0ca241-c740-42a3-8fd9-970024126d64-default-certificate\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.934152 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-proxy-tls\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.934382 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-webhook-cert\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.934768 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6ca48b3-019f-4481-b136-7d392b7073d8-proxy-tls\") pod \"machine-config-controller-84d6567774-9kksl\" (UID: \"f6ca48b3-019f-4481-b136-7d392b7073d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.934985 4913 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.938254 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-srv-cert\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.945382 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd4rt\" (UniqueName: \"kubernetes.io/projected/3f92f014-e88f-4e07-8f20-892e47c5de80-kube-api-access-kd4rt\") pod \"package-server-manager-789f6589d5-5zk5l\" (UID: \"3f92f014-e88f-4e07-8f20-892e47c5de80\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.947628 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxmd4\" (UniqueName: \"kubernetes.io/projected/19f5b544-ffc3-43fb-b9b4-c319cffd63c5-kube-api-access-fxmd4\") pod \"olm-operator-6b444d44fb-98kwn\" (UID: \"19f5b544-ffc3-43fb-b9b4-c319cffd63c5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.948048 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsnmv\" (UniqueName: \"kubernetes.io/projected/9ef3dfdf-4ae9-4baa-a830-e50b4942dd32-kube-api-access-gsnmv\") pod \"migrator-59844c95c7-5l5kl\" (UID: \"9ef3dfdf-4ae9-4baa-a830-e50b4942dd32\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.948698 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6xr5\" (UniqueName: 
\"kubernetes.io/projected/6c174a67-522b-4d34-ba66-905ff560f206-kube-api-access-q6xr5\") pod \"dns-default-bkrnj\" (UID: \"6c174a67-522b-4d34-ba66-905ff560f206\") " pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.950949 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2tpw\" (UniqueName: \"kubernetes.io/projected/08d20980-2196-4efa-952e-defded465fb4-kube-api-access-c2tpw\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.950982 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43177e9e-03e6-4864-843a-c753a096648f-cert\") pod \"ingress-canary-5gjk2\" (UID: \"43177e9e-03e6-4864-843a-c753a096648f\") " pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951012 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rxz9\" (UniqueName: \"kubernetes.io/projected/3243dc09-2f27-4905-a1cc-08ff6d1e270f-kube-api-access-2rxz9\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951036 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-csi-data-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951106 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-mountpoint-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951122 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-socket-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951156 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/08d20980-2196-4efa-952e-defded465fb4-certs\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951176 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/08d20980-2196-4efa-952e-defded465fb4-node-bootstrap-token\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951199 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-registration-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951218 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951236 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv67c\" (UniqueName: \"kubernetes.io/projected/43177e9e-03e6-4864-843a-c753a096648f-kube-api-access-cv67c\") pod \"ingress-canary-5gjk2\" (UID: \"43177e9e-03e6-4864-843a-c753a096648f\") " pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951270 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-plugins-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.951535 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-plugins-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.952843 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-csi-data-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.952876 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-mountpoint-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.952893 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-registration-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: E0121 06:37:27.953108 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.453095538 +0000 UTC m=+138.249455211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.953991 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3243dc09-2f27-4905-a1cc-08ff6d1e270f-socket-dir\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.954678 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.956327 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/08d20980-2196-4efa-952e-defded465fb4-node-bootstrap-token\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.957268 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/08d20980-2196-4efa-952e-defded465fb4-certs\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.961934 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/43177e9e-03e6-4864-843a-c753a096648f-cert\") pod \"ingress-canary-5gjk2\" (UID: \"43177e9e-03e6-4864-843a-c753a096648f\") " pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.977508 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6p62"] Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.981925 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1cbeb4c-0b76-4c39-ab17-18085750e8c2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-h24hb\" (UID: \"b1cbeb4c-0b76-4c39-ab17-18085750e8c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.987722 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-bound-sa-token\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:27 crc kubenswrapper[4913]: I0121 06:37:27.992053 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.000038 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.003893 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9gbf\" (UniqueName: \"kubernetes.io/projected/5fa7fe7b-4999-4a9b-a945-cc404c5467f9-kube-api-access-r9gbf\") pod \"packageserver-d55dfcdfc-vh2n7\" (UID: \"5fa7fe7b-4999-4a9b-a945-cc404c5467f9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.015620 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.028324 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.032117 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwm6w\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-kube-api-access-kwm6w\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.051623 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.051971 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.551947986 +0000 UTC m=+138.348307649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.052289 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.052608 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.552585252 +0000 UTC m=+138.348944925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.079518 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/238fcbbb-ece2-4108-b4be-79ed872e541d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.110540 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q94j\" (UniqueName: \"kubernetes.io/projected/7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53-kube-api-access-6q94j\") pod \"machine-config-operator-74547568cd-pz4xq\" (UID: \"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.113975 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sh9l\" (UniqueName: \"kubernetes.io/projected/238fcbbb-ece2-4108-b4be-79ed872e541d-kube-api-access-7sh9l\") pod \"cluster-image-registry-operator-dc59b4c8b-k2c9f\" (UID: \"238fcbbb-ece2-4108-b4be-79ed872e541d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.126652 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97qxg\" (UniqueName: 
\"kubernetes.io/projected/0a7775c5-ca46-4ab1-b4e1-96c818301059-kube-api-access-97qxg\") pod \"collect-profiles-29482950-sxbqm\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:28 crc kubenswrapper[4913]: W0121 06:37:28.127748 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod026a670d_684f_4eb6_bda0_bd60294d3b95.slice/crio-d2c4d6910c3b46358d4156ca5942e4cb6ce137800a796748c637a4c2aa706509 WatchSource:0}: Error finding container d2c4d6910c3b46358d4156ca5942e4cb6ce137800a796748c637a4c2aa706509: Status 404 returned error can't find the container with id d2c4d6910c3b46358d4156ca5942e4cb6ce137800a796748c637a4c2aa706509 Jan 21 06:37:28 crc kubenswrapper[4913]: W0121 06:37:28.128431 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd2a9afe_21be_43e4_970d_03daff0713a1.slice/crio-e3b9231ae3a56871d91beb2ec7c695d6151fa1fa2ff2296779fd683ef36161ab WatchSource:0}: Error finding container e3b9231ae3a56871d91beb2ec7c695d6151fa1fa2ff2296779fd683ef36161ab: Status 404 returned error can't find the container with id e3b9231ae3a56871d91beb2ec7c695d6151fa1fa2ff2296779fd683ef36161ab Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.153548 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.153930 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.653911697 +0000 UTC m=+138.450271370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.162234 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzcpn\" (UniqueName: \"kubernetes.io/projected/2dc94523-e315-4913-8ea8-ffa72274f5ab-kube-api-access-tzcpn\") pod \"service-ca-operator-777779d784-49vtr\" (UID: \"2dc94523-e315-4913-8ea8-ffa72274f5ab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.199953 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zbxx4\" (UID: \"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.201618 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b89ft\" (UniqueName: \"kubernetes.io/projected/e70bbe19-3e5b-4629-b9bf-3c6fc8072836-kube-api-access-b89ft\") pod \"multus-admission-controller-857f4d67dd-l6rtq\" (UID: \"e70bbe19-3e5b-4629-b9bf-3c6fc8072836\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.207357 4913 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.219359 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.226014 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.226982 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77ct2\" (UniqueName: \"kubernetes.io/projected/3e0ca241-c740-42a3-8fd9-970024126d64-kube-api-access-77ct2\") pod \"router-default-5444994796-cxnpf\" (UID: \"3e0ca241-c740-42a3-8fd9-970024126d64\") " pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.231043 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.240572 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rxz9\" (UniqueName: \"kubernetes.io/projected/3243dc09-2f27-4905-a1cc-08ff6d1e270f-kube-api-access-2rxz9\") pod \"csi-hostpathplugin-jlcqw\" (UID: \"3243dc09-2f27-4905-a1cc-08ff6d1e270f\") " pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.243935 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.248380 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.251300 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2tpw\" (UniqueName: \"kubernetes.io/projected/08d20980-2196-4efa-952e-defded465fb4-kube-api-access-c2tpw\") pod \"machine-config-server-kqctf\" (UID: \"08d20980-2196-4efa-952e-defded465fb4\") " pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.254515 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.254972 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.754960343 +0000 UTC m=+138.551320016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.264181 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.268840 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.279812 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.290999 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.304770 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv67c\" (UniqueName: \"kubernetes.io/projected/43177e9e-03e6-4864-843a-c753a096648f-kube-api-access-cv67c\") pod \"ingress-canary-5gjk2\" (UID: \"43177e9e-03e6-4864-843a-c753a096648f\") " pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.313391 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p4428" event={"ID":"026a670d-684f-4eb6-bda0-bd60294d3b95","Type":"ContainerStarted","Data":"d2c4d6910c3b46358d4156ca5942e4cb6ce137800a796748c637a4c2aa706509"} Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.345605 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" event={"ID":"bd2a9afe-21be-43e4-970d-03daff0713a1","Type":"ContainerStarted","Data":"e3b9231ae3a56871d91beb2ec7c695d6151fa1fa2ff2296779fd683ef36161ab"} Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.367728 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5gjk2" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.368448 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.368827 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kqctf" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.369219 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.369465 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.869435328 +0000 UTC m=+138.665795061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.369788 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.370180 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:28.870146587 +0000 UTC m=+138.666506260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.387010 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" event={"ID":"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01","Type":"ContainerStarted","Data":"7c1e9e9e971d4384f838b4139329efb07faf6cb36bd96f30e288e77ef6ff29c2"} Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.405668 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.464709 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79"] Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.477404 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.477779 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 06:37:28.977763919 +0000 UTC m=+138.774123592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.582872 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.584129 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.084116317 +0000 UTC m=+138.880475990 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.684844 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.685144 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.185128692 +0000 UTC m=+138.981488365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.785553 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp"] Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.786355 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.786852 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.286839707 +0000 UTC m=+139.083199380 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.798114 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s"] Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.808019 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6plkm"] Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.817468 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-k6jdd"] Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.820369 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt"] Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.887152 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.887433 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.387419751 +0000 UTC m=+139.183779424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:28 crc kubenswrapper[4913]: I0121 06:37:28.988091 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:28 crc kubenswrapper[4913]: E0121 06:37:28.988387 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.488376335 +0000 UTC m=+139.284736008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.042196 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8kvjs" podStartSLOduration=120.04218041 podStartE2EDuration="2m0.04218041s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:29.040824674 +0000 UTC m=+138.837184347" watchObservedRunningTime="2026-01-21 06:37:29.04218041 +0000 UTC m=+138.838540083" Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.089065 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.089519 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.589499483 +0000 UTC m=+139.385859156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.178189 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kq7d8"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.190182 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.190434 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.690423456 +0000 UTC m=+139.486783129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.228735 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.238178 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.267288 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.270926 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.277430 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j966n"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.278069 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.292005 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.292302 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.792288575 +0000 UTC m=+139.588648248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.316840 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f95sb"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.324207 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.339320 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bkrnj"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.355944 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.359862 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.368274 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-5fgwx"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.370772 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.375937 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qjrx8"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.378073 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.380282 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.382565 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k855s"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.393439 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.393715 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:29.89370334 +0000 UTC m=+139.690063013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.503470 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.503626 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.003580743 +0000 UTC m=+139.799940416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.503822 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.504121 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.004106047 +0000 UTC m=+139.800465720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.530775 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq"] Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.604794 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.604985 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.104957548 +0000 UTC m=+139.901317231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.605075 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.605467 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.105454242 +0000 UTC m=+139.901813925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.706389 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.706549 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.206517188 +0000 UTC m=+140.002876871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.706779 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.707171 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.207155705 +0000 UTC m=+140.003515388 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.733931 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" podStartSLOduration=120.733901559 podStartE2EDuration="2m0.733901559s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:29.731957797 +0000 UTC m=+139.528317480" watchObservedRunningTime="2026-01-21 06:37:29.733901559 +0000 UTC m=+139.530261262" Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.808546 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.808744 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.308719436 +0000 UTC m=+140.105079109 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.808969 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.809268 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.30925977 +0000 UTC m=+140.105619443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.910188 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.910382 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.410344107 +0000 UTC m=+140.206703810 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:29 crc kubenswrapper[4913]: I0121 06:37:29.910648 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:29 crc kubenswrapper[4913]: E0121 06:37:29.911021 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.411006485 +0000 UTC m=+140.207366148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.011846 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.012152 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.512103473 +0000 UTC m=+140.308463176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.012624 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.013069 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.513045868 +0000 UTC m=+140.309405581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.043077 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a4e8188_571a_4f41_8665_0565bf75f0d3.slice/crio-20fb40a24b9157e58a2ae536d6b63a6c23497f3e8decbd0c007754c38fdb6522 WatchSource:0}: Error finding container 20fb40a24b9157e58a2ae536d6b63a6c23497f3e8decbd0c007754c38fdb6522: Status 404 returned error can't find the container with id 20fb40a24b9157e58a2ae536d6b63a6c23497f3e8decbd0c007754c38fdb6522 Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.056671 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70da4912_d52e_41a4_bf05_91f3f377d243.slice/crio-35622b3550499c291311eec34d3c7b65ee5f9191cd3703bc8c43fc0c9ca4f5c9 WatchSource:0}: Error finding container 35622b3550499c291311eec34d3c7b65ee5f9191cd3703bc8c43fc0c9ca4f5c9: Status 404 returned error can't find the container with id 35622b3550499c291311eec34d3c7b65ee5f9191cd3703bc8c43fc0c9ca4f5c9 Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.075025 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82ebe95b_4e82_49aa_8693_52c0998ec7de.slice/crio-ef859a9ce5257c54c90bc7a069572614d5e182e82cd48180d7700276e0fbcbea WatchSource:0}: Error finding container ef859a9ce5257c54c90bc7a069572614d5e182e82cd48180d7700276e0fbcbea: Status 404 returned error can't find the container 
with id ef859a9ce5257c54c90bc7a069572614d5e182e82cd48180d7700276e0fbcbea Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.076144 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ee14186_f787_47f1_8537_8cb2210ac28c.slice/crio-8f919f76c76f8095bc83962f57284e967dc21004fd0df0f0060e5490f305a7e4 WatchSource:0}: Error finding container 8f919f76c76f8095bc83962f57284e967dc21004fd0df0f0060e5490f305a7e4: Status 404 returned error can't find the container with id 8f919f76c76f8095bc83962f57284e967dc21004fd0df0f0060e5490f305a7e4 Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.078463 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod208b512b_e1b8_4df9_9ec2_0f30bea24a20.slice/crio-0a00c5ff9509b053ee8e3e7342df1a920663cf193d8178c4b839ce553bd1afea WatchSource:0}: Error finding container 0a00c5ff9509b053ee8e3e7342df1a920663cf193d8178c4b839ce553bd1afea: Status 404 returned error can't find the container with id 0a00c5ff9509b053ee8e3e7342df1a920663cf193d8178c4b839ce553bd1afea Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.084421 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57e1cc03_984e_4486_8393_f80bc1aa94af.slice/crio-436150d2c0d1f41c0ee83fdb6123928bf3e9ec5214026a65502ef3ed7be69ac3 WatchSource:0}: Error finding container 436150d2c0d1f41c0ee83fdb6123928bf3e9ec5214026a65502ef3ed7be69ac3: Status 404 returned error can't find the container with id 436150d2c0d1f41c0ee83fdb6123928bf3e9ec5214026a65502ef3ed7be69ac3 Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.092883 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c174a67_522b_4d34_ba66_905ff560f206.slice/crio-71dfea35d5499283bec2045e677e6d7401d1c4562cdc2a2b1259e4c292a59ff1 
WatchSource:0}: Error finding container 71dfea35d5499283bec2045e677e6d7401d1c4562cdc2a2b1259e4c292a59ff1: Status 404 returned error can't find the container with id 71dfea35d5499283bec2045e677e6d7401d1c4562cdc2a2b1259e4c292a59ff1 Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.093713 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ca48b3_019f_4481_b136_7d392b7073d8.slice/crio-3960c0b85a6a56a5b8f1d3ced894406f2ddfd4a5b34bebe1a9bb0f2cff375bf7 WatchSource:0}: Error finding container 3960c0b85a6a56a5b8f1d3ced894406f2ddfd4a5b34bebe1a9bb0f2cff375bf7: Status 404 returned error can't find the container with id 3960c0b85a6a56a5b8f1d3ced894406f2ddfd4a5b34bebe1a9bb0f2cff375bf7 Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.094172 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6e5f1ef_7cb7_4909_beaf_cd352767d0ca.slice/crio-a025855fc0dfaee5539cd674f479e3e36ea77567d2cfd7c2c1ad76a57c2cb0cf WatchSource:0}: Error finding container a025855fc0dfaee5539cd674f479e3e36ea77567d2cfd7c2c1ad76a57c2cb0cf: Status 404 returned error can't find the container with id a025855fc0dfaee5539cd674f479e3e36ea77567d2cfd7c2c1ad76a57c2cb0cf Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.095195 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cdf7744_1629_46a4_b176_0fc75c149a95.slice/crio-5aae9cf401d1cd3c11d6879bf5f78b08f2f5971c648755f84925e08aae909035 WatchSource:0}: Error finding container 5aae9cf401d1cd3c11d6879bf5f78b08f2f5971c648755f84925e08aae909035: Status 404 returned error can't find the container with id 5aae9cf401d1cd3c11d6879bf5f78b08f2f5971c648755f84925e08aae909035 Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.096421 4913 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc207fbab_618a_4c01_8450_cb7ffad0f50d.slice/crio-27b480a591be02ef2d41235c0ade420af9ef5517b57dd91fdbe1c2273f5bd0aa WatchSource:0}: Error finding container 27b480a591be02ef2d41235c0ade420af9ef5517b57dd91fdbe1c2273f5bd0aa: Status 404 returned error can't find the container with id 27b480a591be02ef2d41235c0ade420af9ef5517b57dd91fdbe1c2273f5bd0aa Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.113653 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.113905 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.613890769 +0000 UTC m=+140.410250432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.157166 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19f5b544_ffc3_43fb_b9b4_c319cffd63c5.slice/crio-efcc430f5aed1c08afe1345e34900a057f17e67799a77b9478c4fa393660d023 WatchSource:0}: Error finding container efcc430f5aed1c08afe1345e34900a057f17e67799a77b9478c4fa393660d023: Status 404 returned error can't find the container with id efcc430f5aed1c08afe1345e34900a057f17e67799a77b9478c4fa393660d023 Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.159981 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5567f5a_5084_4cc6_b654_f1190dcc0064.slice/crio-d09f710d1495fb4fe68095007274c5b103267ce2daa1637f19c0ebea6dfed9e4 WatchSource:0}: Error finding container d09f710d1495fb4fe68095007274c5b103267ce2daa1637f19c0ebea6dfed9e4: Status 404 returned error can't find the container with id d09f710d1495fb4fe68095007274c5b103267ce2daa1637f19c0ebea6dfed9e4 Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.216417 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:30 crc 
kubenswrapper[4913]: E0121 06:37:30.217102 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.717076852 +0000 UTC m=+140.513436535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.317005 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.317345 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.817325368 +0000 UTC m=+140.613685041 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.317502 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.317773 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.817762319 +0000 UTC m=+140.614121992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.387523 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7"] Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.413669 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fa7fe7b_4999_4a9b_a945_cc404c5467f9.slice/crio-dd4205466f8e7bce22cafadb041dbd044734595dde8e169d07fd8e9d84d8a60f WatchSource:0}: Error finding container dd4205466f8e7bce22cafadb041dbd044734595dde8e169d07fd8e9d84d8a60f: Status 404 returned error can't find the container with id dd4205466f8e7bce22cafadb041dbd044734595dde8e169d07fd8e9d84d8a60f Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.413856 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" event={"ID":"82ebe95b-4e82-49aa-8693-52c0998ec7de","Type":"ContainerStarted","Data":"ef859a9ce5257c54c90bc7a069572614d5e182e82cd48180d7700276e0fbcbea"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.417626 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" event={"ID":"3dc93a0c-f8e0-4c76-a032-6d3e34878168","Type":"ContainerStarted","Data":"b70c29043f47645e7a927707cc5e0659ec72ac62e6e82635ef7f49a9182b6bba"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.417946 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.418899 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:30.918882818 +0000 UTC m=+140.715242491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.422602 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb"] Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.424178 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-cxnpf" event={"ID":"3e0ca241-c740-42a3-8fd9-970024126d64","Type":"ContainerStarted","Data":"b4f7cf104bd42ce2334382e66b471f7618fde2b6a6aa02252b6a1c874aaa9ac8"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.426897 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" event={"ID":"6cdf7744-1629-46a4-b176-0fc75c149a95","Type":"ContainerStarted","Data":"5aae9cf401d1cd3c11d6879bf5f78b08f2f5971c648755f84925e08aae909035"} Jan 21 
06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.428096 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" event={"ID":"6a4e8188-571a-4f41-8665-0565bf75f0d3","Type":"ContainerStarted","Data":"20fb40a24b9157e58a2ae536d6b63a6c23497f3e8decbd0c007754c38fdb6522"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.429167 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" event={"ID":"57e1cc03-984e-4486-8393-f80bc1aa94af","Type":"ContainerStarted","Data":"436150d2c0d1f41c0ee83fdb6123928bf3e9ec5214026a65502ef3ed7be69ac3"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.430127 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" event={"ID":"70da4912-d52e-41a4-bf05-91f3f377d243","Type":"ContainerStarted","Data":"35622b3550499c291311eec34d3c7b65ee5f9191cd3703bc8c43fc0c9ca4f5c9"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.437660 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" event={"ID":"56b4a4e7-bb42-437e-8dce-70cbc917c7a8","Type":"ContainerStarted","Data":"3e68c3a1be9a8a28795d631d8936218b69cc2a5c00443197a763fed4e1c11829"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.448278 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" event={"ID":"f6ca48b3-019f-4481-b136-7d392b7073d8","Type":"ContainerStarted","Data":"3960c0b85a6a56a5b8f1d3ced894406f2ddfd4a5b34bebe1a9bb0f2cff375bf7"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.453135 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" 
event={"ID":"c207fbab-618a-4c01-8450-cb7ffad0f50d","Type":"ContainerStarted","Data":"27b480a591be02ef2d41235c0ade420af9ef5517b57dd91fdbe1c2273f5bd0aa"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.454906 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" event={"ID":"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53","Type":"ContainerStarted","Data":"ab2bc05b5e9281fe6e3501e96f755e08c0a90bf042018e5cf4910a8c57b87e25"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.455901 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" event={"ID":"9ef3dfdf-4ae9-4baa-a830-e50b4942dd32","Type":"ContainerStarted","Data":"f7cc373b3ce6921173e1af01fe13eed02614767006785181d331c2b9cac72ba7"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.456795 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" event={"ID":"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0","Type":"ContainerStarted","Data":"5eccfbfcf9ee02278f4a4fde6f6825fe9160fb0648fb92a38fe89f35284a5a21"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.457609 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" event={"ID":"8a371e85-6173-4802-976d-7ee68bc9afdc","Type":"ContainerStarted","Data":"556fdf168eb0c80a931e188b0e37019ad0ee8045096315e083169e23ff52819e"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.458426 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" event={"ID":"48edf52b-d54b-4116-95d0-f8051704a4e3","Type":"ContainerStarted","Data":"c7d0244c9d6a10d64b62347f4e56e737366ec3d8a812987d3da50ab0458bd0ba"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.459161 4913 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" event={"ID":"f3e3e7a7-a59e-4d12-8499-38ad4a72832d","Type":"ContainerStarted","Data":"cb3977af5e68023242bf0ddc97686fb8058507b9de52582bb7d762e6b09403d5"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.459848 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-k6jdd" event={"ID":"08ac51dd-419d-4632-8a49-1972be301121","Type":"ContainerStarted","Data":"1711b26d6a73224ca5472a0d210d668d23572f6048bb46f3e8dbd5ec647c64b5"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.462059 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k855s" event={"ID":"c5567f5a-5084-4cc6-b654-f1190dcc0064","Type":"ContainerStarted","Data":"d09f710d1495fb4fe68095007274c5b103267ce2daa1637f19c0ebea6dfed9e4"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.462792 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" event={"ID":"19f5b544-ffc3-43fb-b9b4-c319cffd63c5","Type":"ContainerStarted","Data":"efcc430f5aed1c08afe1345e34900a057f17e67799a77b9478c4fa393660d023"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.466453 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bkrnj" event={"ID":"6c174a67-522b-4d34-ba66-905ff560f206","Type":"ContainerStarted","Data":"71dfea35d5499283bec2045e677e6d7401d1c4562cdc2a2b1259e4c292a59ff1"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.467709 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" event={"ID":"0ee14186-f787-47f1-8537-8cb2210ac28c","Type":"ContainerStarted","Data":"8f919f76c76f8095bc83962f57284e967dc21004fd0df0f0060e5490f305a7e4"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.468431 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" event={"ID":"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca","Type":"ContainerStarted","Data":"a025855fc0dfaee5539cd674f479e3e36ea77567d2cfd7c2c1ad76a57c2cb0cf"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.469008 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" event={"ID":"fdb0c051-dafc-4d42-8c28-d28c049eb0f7","Type":"ContainerStarted","Data":"5c6f98c736472dc2d589a42a06b2aca07cf0ba69da97b9d081ad7bd7c2ec482b"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.469733 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j966n" event={"ID":"208b512b-e1b8-4df9-9ec2-0f30bea24a20","Type":"ContainerStarted","Data":"0a00c5ff9509b053ee8e3e7342df1a920663cf193d8178c4b839ce553bd1afea"} Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.470456 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" event={"ID":"6b1d8220-775c-47a7-a772-00eacc2f957c","Type":"ContainerStarted","Data":"2a114e2897754ab94efde60a2573c389c5c0331b728f55058f0ad97ff789ebce"} Jan 21 06:37:30 crc kubenswrapper[4913]: W0121 06:37:30.479353 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1cbeb4c_0b76_4c39_ab17_18085750e8c2.slice/crio-e3d2a3542ecb6b7add310fec4503bf21d22df3479df9f75ed3067ac547f2d35a WatchSource:0}: Error finding container e3d2a3542ecb6b7add310fec4503bf21d22df3479df9f75ed3067ac547f2d35a: Status 404 returned error can't find the container with id e3d2a3542ecb6b7add310fec4503bf21d22df3479df9f75ed3067ac547f2d35a Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.525445 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.525893 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.025872383 +0000 UTC m=+140.822232156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.626628 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.627197 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.127184406 +0000 UTC m=+140.923544079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.630776 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jlcqw"] Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.672925 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5gjk2"] Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.676318 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l6rtq"] Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.729341 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.730298 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.230283918 +0000 UTC m=+141.026643591 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.730810 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f"] Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.733665 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm"] Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.734148 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l"] Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.741620 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49vtr"] Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.830639 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.830846 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 06:37:31.330814381 +0000 UTC m=+141.127174044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.830934 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.831685 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.331676843 +0000 UTC m=+141.128036516 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:30 crc kubenswrapper[4913]: I0121 06:37:30.933082 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:30 crc kubenswrapper[4913]: E0121 06:37:30.933497 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.43348227 +0000 UTC m=+141.229841943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.035005 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.035776 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.535764089 +0000 UTC m=+141.332123762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.136537 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.136721 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.636693362 +0000 UTC m=+141.433053035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.137033 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.137432 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.637424492 +0000 UTC m=+141.433784165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.241097 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.241691 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.741673254 +0000 UTC m=+141.538032927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.243095 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.243846 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.743835672 +0000 UTC m=+141.540195345 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.346043 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.347648 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.847631702 +0000 UTC m=+141.643991375 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.449076 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.449418 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:31.949402168 +0000 UTC m=+141.745761841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.475377 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" event={"ID":"9ef3dfdf-4ae9-4baa-a830-e50b4942dd32","Type":"ContainerStarted","Data":"4940ca01f0e9f3f5f5928344156090ac97b6ca4fab74ef91cbc6d10f5c5bf441"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.481476 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" event={"ID":"3f92f014-e88f-4e07-8f20-892e47c5de80","Type":"ContainerStarted","Data":"0f33a02ba3aa1873a3a595d4b5ec68e54bbf6bb50d0d8f13d3798bcc0989ca6d"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.485168 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" event={"ID":"5fa7fe7b-4999-4a9b-a945-cc404c5467f9","Type":"ContainerStarted","Data":"3a551da773d0f1c2a076d37775d5dc550ef973cd76947622560b8c9acf60134d"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.485205 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" event={"ID":"5fa7fe7b-4999-4a9b-a945-cc404c5467f9","Type":"ContainerStarted","Data":"dd4205466f8e7bce22cafadb041dbd044734595dde8e169d07fd8e9d84d8a60f"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.486552 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.489129 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" event={"ID":"c207fbab-618a-4c01-8450-cb7ffad0f50d","Type":"ContainerStarted","Data":"5478fc68d4b2ffb50974b99c69ac9c15eed505a05bb5171369ef4c48ec7b6f0c"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.491052 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" event={"ID":"0a7775c5-ca46-4ab1-b4e1-96c818301059","Type":"ContainerStarted","Data":"8af3a3fe3dfde5bcb547f586da849f79867bd334c534af99562358101ad4a451"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.495552 4913 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vh2n7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.495601 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" podUID="5fa7fe7b-4999-4a9b-a945-cc404c5467f9" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.497338 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" event={"ID":"f1c95373-9f04-4ca0-a0a2-0e3e9b8b5ef0","Type":"ContainerStarted","Data":"535a934516ef46b213295aa9268d017b6b175854880d7dee74d5133cb36ec92d"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.499568 4913 generic.go:334] "Generic (PLEG): container finished" 
podID="8a371e85-6173-4802-976d-7ee68bc9afdc" containerID="6d9b217161f73edab65c8877eff569e0764fb365375ecd808afcce9e39dad3cf" exitCode=0 Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.499693 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" event={"ID":"8a371e85-6173-4802-976d-7ee68bc9afdc","Type":"ContainerDied","Data":"6d9b217161f73edab65c8877eff569e0764fb365375ecd808afcce9e39dad3cf"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.502557 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" event={"ID":"48edf52b-d54b-4116-95d0-f8051704a4e3","Type":"ContainerStarted","Data":"6e53fce28553c1fe73b7eb72adf3c7ccae8b3748a0e1b6be34d99c16625b8fb2"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.506878 4913 generic.go:334] "Generic (PLEG): container finished" podID="026a670d-684f-4eb6-bda0-bd60294d3b95" containerID="3f15b55768d29cbd0cd6e4e57fa2a28cfc936a5621df640755559d170798146c" exitCode=0 Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.507150 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p4428" event={"ID":"026a670d-684f-4eb6-bda0-bd60294d3b95","Type":"ContainerDied","Data":"3f15b55768d29cbd0cd6e4e57fa2a28cfc936a5621df640755559d170798146c"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.507618 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" podStartSLOduration=122.50757341 podStartE2EDuration="2m2.50757341s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.505056322 +0000 UTC m=+141.301415995" watchObservedRunningTime="2026-01-21 06:37:31.50757341 +0000 UTC m=+141.303933073" Jan 21 
06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.512232 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-cxnpf" event={"ID":"3e0ca241-c740-42a3-8fd9-970024126d64","Type":"ContainerStarted","Data":"6e48f4877d509b479160f57948aafc4e6c9110080d36e5b323e5f534f7bd06f1"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.515017 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" event={"ID":"56b4a4e7-bb42-437e-8dce-70cbc917c7a8","Type":"ContainerStarted","Data":"1351e3ba3a95a880f41cd7b45d112f72d3ee4396d850c5535fb541ec9cbc0a52"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.520756 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" event={"ID":"57e1cc03-984e-4486-8393-f80bc1aa94af","Type":"ContainerStarted","Data":"6d42f253ccc247388630850dc41781413f5e37fa53158d78d29109889b611ad3"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.530746 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" event={"ID":"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01","Type":"ContainerStarted","Data":"78d1e4adea34683cbee4433143a439394251aa02e7d283454de1ee49c197e873"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.535278 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" event={"ID":"465393d8-5293-482f-8f3b-91578b3ba57b","Type":"ContainerStarted","Data":"ff8925cdf094abcb97fa829780402ee78be44b7db21ff0ab7ba12b6da6c7c207"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.543127 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" 
event={"ID":"b1cbeb4c-0b76-4c39-ab17-18085750e8c2","Type":"ContainerStarted","Data":"e3d2a3542ecb6b7add310fec4503bf21d22df3479df9f75ed3067ac547f2d35a"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.551316 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.554065 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.052878658 +0000 UTC m=+141.849238331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.554172 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" event={"ID":"3243dc09-2f27-4905-a1cc-08ff6d1e270f","Type":"ContainerStarted","Data":"a6af533befa224313080e1746568217d9ccfa07bdf1610e50e4ff9bb4f25d2d0"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.559135 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.563716 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" event={"ID":"70da4912-d52e-41a4-bf05-91f3f377d243","Type":"ContainerStarted","Data":"5feb0b07a4290e83834f632db08e39de35311551b17385edb36bae2ff82e9081"} Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.565939 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.065925127 +0000 UTC m=+141.862284800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.572079 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-cxnpf" podStartSLOduration=122.57205777 podStartE2EDuration="2m2.57205777s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.568396653 +0000 UTC m=+141.364756346" watchObservedRunningTime="2026-01-21 06:37:31.57205777 +0000 UTC m=+141.368417443" 
Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.621416 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-kq7d8" podStartSLOduration=122.621399717 podStartE2EDuration="2m2.621399717s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.620056601 +0000 UTC m=+141.416416274" watchObservedRunningTime="2026-01-21 06:37:31.621399717 +0000 UTC m=+141.417759390" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.624149 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" event={"ID":"bd2a9afe-21be-43e4-970d-03daff0713a1","Type":"ContainerStarted","Data":"62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.625166 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.648019 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" event={"ID":"e70bbe19-3e5b-4629-b9bf-3c6fc8072836","Type":"ContainerStarted","Data":"6c7fa0c796a72c63c61994649ddd8916637ad7783de1094fb5742fda5843f240"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.648313 4913 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-b6p62 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.648367 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" 
podUID="bd2a9afe-21be-43e4-970d-03daff0713a1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.659817 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gd6rp" podStartSLOduration=122.659802492 podStartE2EDuration="2m2.659802492s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.658242261 +0000 UTC m=+141.454601934" watchObservedRunningTime="2026-01-21 06:37:31.659802492 +0000 UTC m=+141.456162165" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.660647 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.661873 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.161857317 +0000 UTC m=+141.958216990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.715553 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bkrnj" event={"ID":"6c174a67-522b-4d34-ba66-905ff560f206","Type":"ContainerStarted","Data":"ab26a0d9b5d8c83ce276e743bb692cc8d05157e4c92550c01def6944b8b85c37"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.735763 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-k6jdd" event={"ID":"08ac51dd-419d-4632-8a49-1972be301121","Type":"ContainerStarted","Data":"a9f412d7a4c7905dad1e375b9d243183aac250f3169ab9a45532c276b5d6635c"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.762389 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.762694 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.262681667 +0000 UTC m=+142.059041340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.765139 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" podStartSLOduration=122.765114492 podStartE2EDuration="2m2.765114492s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.695941757 +0000 UTC m=+141.492301430" watchObservedRunningTime="2026-01-21 06:37:31.765114492 +0000 UTC m=+141.561474165" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.765328 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-k6jdd" podStartSLOduration=122.765323568 podStartE2EDuration="2m2.765323568s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.764523737 +0000 UTC m=+141.560883420" watchObservedRunningTime="2026-01-21 06:37:31.765323568 +0000 UTC m=+141.561683241" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.781461 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" event={"ID":"fdb0c051-dafc-4d42-8c28-d28c049eb0f7","Type":"ContainerStarted","Data":"97e0ea62b166738e080bdd54067240747cef6f3b3ddedf944521f4558b0d795c"} Jan 21 06:37:31 crc 
kubenswrapper[4913]: I0121 06:37:31.782341 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.789326 4913 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cjqvz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.789378 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" podUID="fdb0c051-dafc-4d42-8c28-d28c049eb0f7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.791562 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k855s" event={"ID":"c5567f5a-5084-4cc6-b654-f1190dcc0064","Type":"ContainerStarted","Data":"6b09aeeb81a9f08eb8da4b70dfab1f0704f83466e82b190a59516cab9ae509cf"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.792522 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-k855s" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.818782 4913 patch_prober.go:28] interesting pod/downloads-7954f5f757-k855s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.818838 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k855s" 
podUID="c5567f5a-5084-4cc6-b654-f1190dcc0064" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.819065 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" event={"ID":"82ebe95b-4e82-49aa-8693-52c0998ec7de","Type":"ContainerStarted","Data":"a991521fdebd53fcfdb11ad2d1d02cc08d8f7880bc37d9cc4a09f1c6afa7cf1b"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.819204 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" podStartSLOduration=122.819180685 podStartE2EDuration="2m2.819180685s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.812128097 +0000 UTC m=+141.608487770" watchObservedRunningTime="2026-01-21 06:37:31.819180685 +0000 UTC m=+141.615540358" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.819823 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.826744 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" event={"ID":"2dc94523-e315-4913-8ea8-ffa72274f5ab","Type":"ContainerStarted","Data":"42201769803aa0d96157ebef6e6d45ffa94fd026bf33a004dd48e731774c945b"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.826776 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" 
event={"ID":"2dc94523-e315-4913-8ea8-ffa72274f5ab","Type":"ContainerStarted","Data":"1c14de190265055f82a69f5a8751fa6dffc9f1fa70d05c6d9aeb4e53ef216116"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.830055 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" event={"ID":"6cdf7744-1629-46a4-b176-0fc75c149a95","Type":"ContainerStarted","Data":"ead66f604f28bf227ff5b5218886f29ecc158249c059f996e3036f13b05dcde3"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.830522 4913 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-tbgjj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.830558 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" podUID="82ebe95b-4e82-49aa-8693-52c0998ec7de" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.834240 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-k855s" podStartSLOduration=122.834229926 podStartE2EDuration="2m2.834229926s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.831852043 +0000 UTC m=+141.628211716" watchObservedRunningTime="2026-01-21 06:37:31.834229926 +0000 UTC m=+141.630589599" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.839737 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-5gjk2" event={"ID":"43177e9e-03e6-4864-843a-c753a096648f","Type":"ContainerStarted","Data":"14cbdff2dc7668af3ab2dd0a78497e678ba114f16da6b252dd8136c52470352d"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.847783 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49vtr" podStartSLOduration=122.847773068 podStartE2EDuration="2m2.847773068s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.847366747 +0000 UTC m=+141.643726420" watchObservedRunningTime="2026-01-21 06:37:31.847773068 +0000 UTC m=+141.644132741" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.848257 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" event={"ID":"3dc93a0c-f8e0-4c76-a032-6d3e34878168","Type":"ContainerStarted","Data":"0cdac816e352c9dd877b1acf1303f083f700337106d006450f7551e6d6ec9f52"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.849560 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" event={"ID":"d6e5f1ef-7cb7-4909-beaf-cd352767d0ca","Type":"ContainerStarted","Data":"943ba7ad8e35481e8cfadba5c3d47bdfd8b4a22d56679068442de4499747eea4"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.850974 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" event={"ID":"f3e3e7a7-a59e-4d12-8499-38ad4a72832d","Type":"ContainerStarted","Data":"ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.851446 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.853439 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" event={"ID":"6b1d8220-775c-47a7-a772-00eacc2f957c","Type":"ContainerStarted","Data":"c15e6ffe791c04b9dd5aeac8c356017e9c089ae3e068b0246741c046fd233d5a"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.855822 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" event={"ID":"f6ca48b3-019f-4481-b136-7d392b7073d8","Type":"ContainerStarted","Data":"1af0f87ecb503f03ba67e76f79f85af671500e787143c9d2de7a6f578e601f3f"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.856155 4913 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qjrx8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.856183 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.862896 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.864176 
4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.364160485 +0000 UTC m=+142.160520158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.875176 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5gzt8" podStartSLOduration=122.875162199 podStartE2EDuration="2m2.875162199s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.861675899 +0000 UTC m=+141.658035572" watchObservedRunningTime="2026-01-21 06:37:31.875162199 +0000 UTC m=+141.671521872" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.876246 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j966n" event={"ID":"208b512b-e1b8-4df9-9ec2-0f30bea24a20","Type":"ContainerStarted","Data":"c529394340b2aafb749917221ef09774bc2626294ce56ea6a16c072658905eb4"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.876551 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" podStartSLOduration=122.876545676 
podStartE2EDuration="2m2.876545676s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.875739924 +0000 UTC m=+141.672099597" watchObservedRunningTime="2026-01-21 06:37:31.876545676 +0000 UTC m=+141.672905349" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.876848 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.883966 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" event={"ID":"6a4e8188-571a-4f41-8665-0565bf75f0d3","Type":"ContainerStarted","Data":"9aebd14d4dbaa35a1cb5311c634b85c28c58d3bd1a43070485c5f1caba588313"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.884540 4913 patch_prober.go:28] interesting pod/console-operator-58897d9998-j966n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.884571 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j966n" podUID="208b512b-e1b8-4df9-9ec2-0f30bea24a20" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.905319 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" 
event={"ID":"19f5b544-ffc3-43fb-b9b4-c319cffd63c5","Type":"ContainerStarted","Data":"128f1f950f4cd364364c5e1096100c21869bda7d7c62506adc50637ff1874b83"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.905770 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.908113 4913 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-98kwn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.911147 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" podUID="19f5b544-ffc3-43fb-b9b4-c319cffd63c5" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.912366 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-swrpx" podStartSLOduration=122.912342191 podStartE2EDuration="2m2.912342191s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.893337984 +0000 UTC m=+141.689697657" watchObservedRunningTime="2026-01-21 06:37:31.912342191 +0000 UTC m=+141.708701864" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.913020 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5flq2" 
podStartSLOduration=122.913013039 podStartE2EDuration="2m2.913013039s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.908416026 +0000 UTC m=+141.704775699" watchObservedRunningTime="2026-01-21 06:37:31.913013039 +0000 UTC m=+141.709372712" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.914010 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" event={"ID":"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53","Type":"ContainerStarted","Data":"b77a6978b444a101442820a2957bd55881d6551d6fbd1b9392bc3aedc3033a78"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.921460 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kqctf" event={"ID":"08d20980-2196-4efa-952e-defded465fb4","Type":"ContainerStarted","Data":"a97f5604cb9263066b45a4b81290fb45943d0453d76ce7945059e6e0402bfd66"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.921493 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kqctf" event={"ID":"08d20980-2196-4efa-952e-defded465fb4","Type":"ContainerStarted","Data":"81d3773cd493ed26d7e4d160f443c300fba060350d54806c455daa0987a0f26c"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.924112 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" event={"ID":"238fcbbb-ece2-4108-b4be-79ed872e541d","Type":"ContainerStarted","Data":"b6e85abfb4417f892d9c8bb170a26b7a54c694d381352f853bac71caddcfca8a"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.924153 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" 
event={"ID":"238fcbbb-ece2-4108-b4be-79ed872e541d","Type":"ContainerStarted","Data":"45797279a554551aca6c49898c336c017d121c2b2772bf3ded612f032be7c08f"} Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.932646 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5gjk2" podStartSLOduration=6.932630883 podStartE2EDuration="6.932630883s" podCreationTimestamp="2026-01-21 06:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.929790967 +0000 UTC m=+141.726150640" watchObservedRunningTime="2026-01-21 06:37:31.932630883 +0000 UTC m=+141.728990556" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.956462 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-6plkm" podStartSLOduration=122.956444708 podStartE2EDuration="2m2.956444708s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.953635953 +0000 UTC m=+141.749995626" watchObservedRunningTime="2026-01-21 06:37:31.956444708 +0000 UTC m=+141.752804381" Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.965421 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:31 crc kubenswrapper[4913]: E0121 06:37:31.968389 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.468374286 +0000 UTC m=+142.264734079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:31 crc kubenswrapper[4913]: I0121 06:37:31.978479 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" podStartSLOduration=122.978458215 podStartE2EDuration="2m2.978458215s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:31.977640974 +0000 UTC m=+141.774000667" watchObservedRunningTime="2026-01-21 06:37:31.978458215 +0000 UTC m=+141.774817888" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.014760 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kgg79" podStartSLOduration=123.014721523 podStartE2EDuration="2m3.014721523s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:32.012109163 +0000 UTC m=+141.808468836" watchObservedRunningTime="2026-01-21 06:37:32.014721523 +0000 UTC m=+141.811081196" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.059766 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k2c9f" podStartSLOduration=123.059737704 podStartE2EDuration="2m3.059737704s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:32.035309952 +0000 UTC m=+141.831669625" watchObservedRunningTime="2026-01-21 06:37:32.059737704 +0000 UTC m=+141.856097367" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.059915 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kqctf" podStartSLOduration=7.059910799 podStartE2EDuration="7.059910799s" podCreationTimestamp="2026-01-21 06:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:32.057654849 +0000 UTC m=+141.854014522" watchObservedRunningTime="2026-01-21 06:37:32.059910799 +0000 UTC m=+141.856270472" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.074062 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.074320 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.574306003 +0000 UTC m=+142.370665676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.093410 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" podStartSLOduration=123.093392152 podStartE2EDuration="2m3.093392152s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:32.075183806 +0000 UTC m=+141.871543479" watchObservedRunningTime="2026-01-21 06:37:32.093392152 +0000 UTC m=+141.889751825" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.110165 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-j966n" podStartSLOduration=123.110136889 podStartE2EDuration="2m3.110136889s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:32.108218148 +0000 UTC m=+141.904577821" watchObservedRunningTime="2026-01-21 06:37:32.110136889 +0000 UTC m=+141.906496562" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.176250 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: 
\"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.176699 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.676687525 +0000 UTC m=+142.473047198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.273938 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.277466 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.277882 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.777855874 +0000 UTC m=+142.574215567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.300847 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:32 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:32 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:32 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.301108 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.379392 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.379784 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 06:37:32.879773364 +0000 UTC m=+142.676133037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.481306 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.481611 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.981575431 +0000 UTC m=+142.777935094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.481667 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.482015 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:32.982002162 +0000 UTC m=+142.778361825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.583029 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.583499 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.083484881 +0000 UTC m=+142.879844554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.684347 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.684672 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.184661331 +0000 UTC m=+142.981021004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.785646 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.785805 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.285782949 +0000 UTC m=+143.082142622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.786749 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.787058 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.287050313 +0000 UTC m=+143.083409976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.887828 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.888231 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.388217592 +0000 UTC m=+143.184577265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.937954 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" event={"ID":"c207fbab-618a-4c01-8450-cb7ffad0f50d","Type":"ContainerStarted","Data":"45d0dcabf7309955eec07c984211b0153a93c71d4bd539e96746cc726f45ee6e"} Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.941563 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" event={"ID":"3243dc09-2f27-4905-a1cc-08ff6d1e270f","Type":"ContainerStarted","Data":"c508ed471b09484ed8523f8e356a0f549ffa702c91c1727787bcba35924470f5"} Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.948900 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" event={"ID":"7fb4a9fd-5070-4dc7-ab30-a70b3b8e4b53","Type":"ContainerStarted","Data":"79935a8189f00e1c2abf01d3af7900b205fa684a847f59a814beb4bf0fa56ab1"} Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.964556 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5gjk2" event={"ID":"43177e9e-03e6-4864-843a-c753a096648f","Type":"ContainerStarted","Data":"d8036201069ab2b54582c6855a8a2f6580544109879d60a237d24962a07d4390"} Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.966584 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5fgwx" 
podStartSLOduration=123.966569253 podStartE2EDuration="2m3.966569253s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:32.964993921 +0000 UTC m=+142.761353594" watchObservedRunningTime="2026-01-21 06:37:32.966569253 +0000 UTC m=+142.762928926" Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.983094 4913 generic.go:334] "Generic (PLEG): container finished" podID="0ee14186-f787-47f1-8537-8cb2210ac28c" containerID="f3fe3d7736d8b298c6f2f4d11775d277425b2a3b0b09aa001967af6ef48fa51a" exitCode=0 Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.983376 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" event={"ID":"0ee14186-f787-47f1-8537-8cb2210ac28c","Type":"ContainerDied","Data":"f3fe3d7736d8b298c6f2f4d11775d277425b2a3b0b09aa001967af6ef48fa51a"} Jan 21 06:37:32 crc kubenswrapper[4913]: I0121 06:37:32.989442 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:32 crc kubenswrapper[4913]: E0121 06:37:32.990343 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.490328757 +0000 UTC m=+143.286688430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.033496 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz4xq" podStartSLOduration=124.033479829 podStartE2EDuration="2m4.033479829s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.007547506 +0000 UTC m=+142.803907179" watchObservedRunningTime="2026-01-21 06:37:33.033479829 +0000 UTC m=+142.829839502" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.041408 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" event={"ID":"e70bbe19-3e5b-4629-b9bf-3c6fc8072836","Type":"ContainerStarted","Data":"6743d0850e8e1324f70836fa62f6cf39fb99dbe70ec9e8f52f3477a56e851033"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.041446 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" event={"ID":"e70bbe19-3e5b-4629-b9bf-3c6fc8072836","Type":"ContainerStarted","Data":"a8b52555d9d79ffc19ea69145002ee5080bb823826ac273bd77aefd3171f5aaa"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.088539 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6rtq" podStartSLOduration=124.088522587 
podStartE2EDuration="2m4.088522587s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.087027977 +0000 UTC m=+142.883387650" watchObservedRunningTime="2026-01-21 06:37:33.088522587 +0000 UTC m=+142.884882260" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.090261 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.091519 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.591499967 +0000 UTC m=+143.387859640 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.093873 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" event={"ID":"6246eefb-0ffa-4a2c-8dd3-27cedaaf8f01","Type":"ContainerStarted","Data":"ad2afa26dfc05cd49f90b9eb7883d9eb727e5bb35328d31056e07b10aa772523"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.123921 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5pm5" podStartSLOduration=125.123888861 podStartE2EDuration="2m5.123888861s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.123884631 +0000 UTC m=+142.920244304" watchObservedRunningTime="2026-01-21 06:37:33.123888861 +0000 UTC m=+142.920248544" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.140830 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" event={"ID":"9ef3dfdf-4ae9-4baa-a830-e50b4942dd32","Type":"ContainerStarted","Data":"d02d718c837a8f7099917d6936060ea5ae138fe7c06ab78e09e2967d3c09c62e"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.149134 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" 
event={"ID":"3f92f014-e88f-4e07-8f20-892e47c5de80","Type":"ContainerStarted","Data":"05dc613cfe664b4e4cdc251c60c45c9097c81d35531a511a12c398f60f81b38e"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.149348 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" event={"ID":"3f92f014-e88f-4e07-8f20-892e47c5de80","Type":"ContainerStarted","Data":"3cae0f6b28083d9730399cd3e1b262b23482281f8d43eb8e6bbc1e03316e9f99"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.150044 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.174431 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5l5kl" podStartSLOduration=124.17441745 podStartE2EDuration="2m4.17441745s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.173058604 +0000 UTC m=+142.969418287" watchObservedRunningTime="2026-01-21 06:37:33.17441745 +0000 UTC m=+142.970777113" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.190896 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bkrnj" event={"ID":"6c174a67-522b-4d34-ba66-905ff560f206","Type":"ContainerStarted","Data":"66ef69324bb27af3b448434137010eea4e696b5b09913e45883abdf29604d80e"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.191203 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.191886 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.193036 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.693023696 +0000 UTC m=+143.489383369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.202569 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" podStartSLOduration=124.202553421 podStartE2EDuration="2m4.202553421s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.200260979 +0000 UTC m=+142.996620652" watchObservedRunningTime="2026-01-21 06:37:33.202553421 +0000 UTC m=+142.998913094" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.204578 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" 
event={"ID":"57e1cc03-984e-4486-8393-f80bc1aa94af","Type":"ContainerStarted","Data":"f75d6d7d12b81f3561db9ca46abfdeb053f04afcb38d6a670b580f528e71a92d"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.220582 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" event={"ID":"48edf52b-d54b-4116-95d0-f8051704a4e3","Type":"ContainerStarted","Data":"e7514f43c271b4b19d20438633b628a8cc842725b49470e5f5aa9cb3fafe2297"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.226945 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bkrnj" podStartSLOduration=8.226931571 podStartE2EDuration="8.226931571s" podCreationTimestamp="2026-01-21 06:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.225938094 +0000 UTC m=+143.022297757" watchObservedRunningTime="2026-01-21 06:37:33.226931571 +0000 UTC m=+143.023291244" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.244246 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" event={"ID":"8a371e85-6173-4802-976d-7ee68bc9afdc","Type":"ContainerStarted","Data":"6f5961288fb63653b054ead61f727c1e9dd9cf60e59a3b39f0bcb04f8c7b408f"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.244513 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.261800 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" event={"ID":"465393d8-5293-482f-8f3b-91578b3ba57b","Type":"ContainerStarted","Data":"e2878df36263cb8bb182118daa50ab3d3c75f9571223b72f6b37e78c27738677"} Jan 21 06:37:33 crc kubenswrapper[4913]: 
I0121 06:37:33.261848 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" event={"ID":"465393d8-5293-482f-8f3b-91578b3ba57b","Type":"ContainerStarted","Data":"b9785da6eaa920126af79b7c788237c0592086a5842665c63b3e7789ce9793cd"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.276738 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:33 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:33 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:33 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.276836 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.278208 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" event={"ID":"0a7775c5-ca46-4ab1-b4e1-96c818301059","Type":"ContainerStarted","Data":"738d06bd8d6f5c4238e8c2c76b50d3599fa637b42354f2fb64db79311e1e4868"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.285359 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p4428" event={"ID":"026a670d-684f-4eb6-bda0-bd60294d3b95","Type":"ContainerStarted","Data":"dde2f74ae22d10dbe900066233ebccc8b5fb81dc49f3502b013b8f80fb564388"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.285402 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p4428" 
event={"ID":"026a670d-684f-4eb6-bda0-bd60294d3b95","Type":"ContainerStarted","Data":"9a8b45479c822246c3eaf9eef7a23feeb6fa1b2603e68390e7e90590fcc469e0"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.293768 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.294174 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-f95sb" podStartSLOduration=124.294159425 podStartE2EDuration="2m4.294159425s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.261920654 +0000 UTC m=+143.058280327" watchObservedRunningTime="2026-01-21 06:37:33.294159425 +0000 UTC m=+143.090519098" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.295063 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.795041909 +0000 UTC m=+143.591401582 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.342864 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" event={"ID":"f6ca48b3-019f-4481-b136-7d392b7073d8","Type":"ContainerStarted","Data":"cc46aa6d0c5e46d72ccbfa48a20fefeb1f52df3122972a22d9d4ce26dd03a630"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.344199 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-5p75s" podStartSLOduration=124.34418575 podStartE2EDuration="2m4.34418575s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.294395521 +0000 UTC m=+143.090755194" watchObservedRunningTime="2026-01-21 06:37:33.34418575 +0000 UTC m=+143.140545413" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.344498 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4bs7p" podStartSLOduration=124.344491838 podStartE2EDuration="2m4.344491838s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.342550367 +0000 UTC m=+143.138910030" watchObservedRunningTime="2026-01-21 06:37:33.344491838 +0000 UTC 
m=+143.140851511" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.365730 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" event={"ID":"b1cbeb4c-0b76-4c39-ab17-18085750e8c2","Type":"ContainerStarted","Data":"157bb01bcd8a30b632547d11c3c7863ac2466c9b886b1eeafd1b770eaeaaa8c5"} Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.366871 4913 patch_prober.go:28] interesting pod/downloads-7954f5f757-k855s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.366910 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k855s" podUID="c5567f5a-5084-4cc6-b654-f1190dcc0064" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.367370 4913 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qjrx8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.367392 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.388874 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.402302 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cjqvz" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.406165 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" podStartSLOduration=124.406148643 podStartE2EDuration="2m4.406148643s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.365429447 +0000 UTC m=+143.161789120" watchObservedRunningTime="2026-01-21 06:37:33.406148643 +0000 UTC m=+143.202508316" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.406985 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-98kwn" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.409221 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.411216 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vh2n7" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.435799 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-21 06:37:33.935783844 +0000 UTC m=+143.732143517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.443291 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-p4428" podStartSLOduration=124.443277734 podStartE2EDuration="2m4.443277734s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.442411071 +0000 UTC m=+143.238770744" watchObservedRunningTime="2026-01-21 06:37:33.443277734 +0000 UTC m=+143.239637407" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.474644 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-j966n" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.512311 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.514237 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.014216737 +0000 UTC m=+143.810576410 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.563676 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" podStartSLOduration=124.563658487 podStartE2EDuration="2m4.563658487s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.537074938 +0000 UTC m=+143.333434611" watchObservedRunningTime="2026-01-21 06:37:33.563658487 +0000 UTC m=+143.360018160" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.594984 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.616263 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.616536 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.116524187 +0000 UTC m=+143.912883860 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.665162 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9kksl" podStartSLOduration=124.665145885 podStartE2EDuration="2m4.665145885s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.606331725 +0000 UTC m=+143.402691398" watchObservedRunningTime="2026-01-21 06:37:33.665145885 +0000 UTC m=+143.461505558" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.698209 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-h24hb" podStartSLOduration=124.698196597 podStartE2EDuration="2m4.698196597s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.696030619 +0000 UTC m=+143.492390292" watchObservedRunningTime="2026-01-21 06:37:33.698196597 +0000 UTC m=+143.494556270" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.719329 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.719462 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.219439654 +0000 UTC m=+144.015799317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.719540 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.719888 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.219881555 +0000 UTC m=+144.016241228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.800912 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mvlq6"] Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.801800 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.823830 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.824050 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-catalog-content\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.824095 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 06:37:34.324069326 +0000 UTC m=+144.120428999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.824157 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-utilities\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.824225 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggtzg\" (UniqueName: \"kubernetes.io/projected/f2b20a33-f426-426f-9657-3d11d403629f-kube-api-access-ggtzg\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.824250 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.824693 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.324687312 +0000 UTC m=+144.121046985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.837102 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.864707 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mvlq6"] Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.924805 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.924987 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggtzg\" (UniqueName: \"kubernetes.io/projected/f2b20a33-f426-426f-9657-3d11d403629f-kube-api-access-ggtzg\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: E0121 06:37:33.925140 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.425113042 +0000 UTC m=+144.221472715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.925324 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-catalog-content\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.925737 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-catalog-content\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.925806 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-utilities\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.926055 4913 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-utilities\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.930556 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zbxx4" podStartSLOduration=124.930539337 podStartE2EDuration="2m4.930539337s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:33.900919506 +0000 UTC m=+143.697279179" watchObservedRunningTime="2026-01-21 06:37:33.930539337 +0000 UTC m=+143.726899010" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.960483 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggtzg\" (UniqueName: \"kubernetes.io/projected/f2b20a33-f426-426f-9657-3d11d403629f-kube-api-access-ggtzg\") pod \"community-operators-mvlq6\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.963453 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ffbwk"] Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.972314 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.980104 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 06:37:33 crc kubenswrapper[4913]: I0121 06:37:33.993579 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffbwk"] Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.036454 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-catalog-content\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.036497 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9zd8\" (UniqueName: \"kubernetes.io/projected/92ab7368-d5ff-4ecc-846a-96791a313bce-kube-api-access-r9zd8\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.036522 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.036572 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-utilities\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.036904 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.536889855 +0000 UTC m=+144.333249528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.118860 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.139545 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.139746 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-catalog-content\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.139778 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9zd8\" (UniqueName: \"kubernetes.io/projected/92ab7368-d5ff-4ecc-846a-96791a313bce-kube-api-access-r9zd8\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.139836 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-utilities\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.140301 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-utilities\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " 
pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.140665 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.640651354 +0000 UTC m=+144.437011027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.140870 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-catalog-content\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.158874 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rm75l"] Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.159784 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.169856 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9zd8\" (UniqueName: \"kubernetes.io/projected/92ab7368-d5ff-4ecc-846a-96791a313bce-kube-api-access-r9zd8\") pod \"certified-operators-ffbwk\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.177623 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rm75l"] Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.194111 4913 csr.go:261] certificate signing request csr-jv8rl is approved, waiting to be issued Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.203827 4913 csr.go:257] certificate signing request csr-jv8rl is issued Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.251458 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9dfz\" (UniqueName: \"kubernetes.io/projected/be61dd34-8d4d-4525-8187-3c21f22cd88a-kube-api-access-n9dfz\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.251490 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-catalog-content\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.251534 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-utilities\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.251561 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.251841 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.75182961 +0000 UTC m=+144.548189283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.278485 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:34 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:34 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:34 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.278529 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.313688 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.352455 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.352947 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9dfz\" (UniqueName: \"kubernetes.io/projected/be61dd34-8d4d-4525-8187-3c21f22cd88a-kube-api-access-n9dfz\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.352967 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-catalog-content\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.353011 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-utilities\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.353691 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-utilities\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " 
pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.353727 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.853697509 +0000 UTC m=+144.650057182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.354158 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-catalog-content\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.369674 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fszdj"] Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.371257 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.407176 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9dfz\" (UniqueName: \"kubernetes.io/projected/be61dd34-8d4d-4525-8187-3c21f22cd88a-kube-api-access-n9dfz\") pod \"community-operators-rm75l\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.411631 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fszdj"] Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.449522 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" event={"ID":"0ee14186-f787-47f1-8537-8cb2210ac28c","Type":"ContainerStarted","Data":"bb80fdbcab8c188c037de2c17ba9f3d47ee1a8f40de7d45b21922035248e39b5"} Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.457296 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-catalog-content\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.457326 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-utilities\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.457349 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.457415 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82n2p\" (UniqueName: \"kubernetes.io/projected/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-kube-api-access-82n2p\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.458250 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:34.958240978 +0000 UTC m=+144.754600651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.499568 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" event={"ID":"3243dc09-2f27-4905-a1cc-08ff6d1e270f","Type":"ContainerStarted","Data":"5575122f01690d43e5471873512af32675a183059f2884ee3b59f362a09960fe"} Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.500031 4913 patch_prober.go:28] interesting pod/downloads-7954f5f757-k855s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.500064 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k855s" podUID="c5567f5a-5084-4cc6-b654-f1190dcc0064" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.518166 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.520532 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.548113 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" podStartSLOduration=125.548095516 podStartE2EDuration="2m5.548095516s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:34.478430067 +0000 UTC m=+144.274789740" watchObservedRunningTime="2026-01-21 06:37:34.548095516 +0000 UTC m=+144.344455189" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.564300 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.564388 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.06436907 +0000 UTC m=+144.860728753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.565200 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.565652 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82n2p\" (UniqueName: \"kubernetes.io/projected/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-kube-api-access-82n2p\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.566422 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-catalog-content\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.566510 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-utilities\") pod \"certified-operators-fszdj\" (UID: 
\"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.569984 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.06997048 +0000 UTC m=+144.866330143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.580033 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-utilities\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.587216 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-catalog-content\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.669064 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.669490 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.169475605 +0000 UTC m=+144.965835268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.669489 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82n2p\" (UniqueName: \"kubernetes.io/projected/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-kube-api-access-82n2p\") pod \"certified-operators-fszdj\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.711827 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.771672 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.772125 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.272114464 +0000 UTC m=+145.068474137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.803757 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffbwk"] Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.880095 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.880233 4913 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.380206129 +0000 UTC m=+145.176565802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.880363 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.880670 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.3806579 +0000 UTC m=+145.177017573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.936836 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mvlq6"] Jan 21 06:37:34 crc kubenswrapper[4913]: I0121 06:37:34.981164 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:34 crc kubenswrapper[4913]: E0121 06:37:34.981486 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.481472011 +0000 UTC m=+145.277831684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.044996 4913 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.082933 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.083292 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.583279798 +0000 UTC m=+145.379639471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.163381 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fszdj"] Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.186553 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.186864 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.686849781 +0000 UTC m=+145.483209454 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.205447 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 06:32:34 +0000 UTC, rotation deadline is 2026-12-08 09:42:02.361769364 +0000 UTC Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.205469 4913 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7707h4m27.156302006s for next certificate rotation Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.275143 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:35 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:35 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:35 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.275194 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.285519 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rm75l"] Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.290432 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.290884 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.790869317 +0000 UTC m=+145.587229000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.394181 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.394408 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.894372019 +0000 UTC m=+145.690731692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.495553 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.495973 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 06:37:35.99595201 +0000 UTC m=+145.792311693 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.504701 4913 generic.go:334] "Generic (PLEG): container finished" podID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerID="1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148" exitCode=0 Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.504755 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffbwk" event={"ID":"92ab7368-d5ff-4ecc-846a-96791a313bce","Type":"ContainerDied","Data":"1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148"} Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.504802 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffbwk" event={"ID":"92ab7368-d5ff-4ecc-846a-96791a313bce","Type":"ContainerStarted","Data":"56ab7cdf728ac690777654ae4eaf5e6fc42307f0dee5ce8045bb907e80f0f634"} Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.506035 4913 generic.go:334] "Generic (PLEG): container finished" podID="f2b20a33-f426-426f-9657-3d11d403629f" containerID="6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9" exitCode=0 Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.506102 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlq6" event={"ID":"f2b20a33-f426-426f-9657-3d11d403629f","Type":"ContainerDied","Data":"6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9"} Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.506134 
4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlq6" event={"ID":"f2b20a33-f426-426f-9657-3d11d403629f","Type":"ContainerStarted","Data":"6a8e2ac63fb84aa47578d17a8198d55bdad0c3fb7a2896b7a8bd7e3526aa7149"} Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.506317 4913 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.507716 4913 generic.go:334] "Generic (PLEG): container finished" podID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerID="c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635" exitCode=0 Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.507768 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerDied","Data":"c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635"} Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.507793 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerStarted","Data":"eb2a4164400078d5e47383eb8825b8a46cafb4407ff81311bae02795bf3351aa"} Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.513334 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" event={"ID":"3243dc09-2f27-4905-a1cc-08ff6d1e270f","Type":"ContainerStarted","Data":"dbf01b660dcba306070351ff37808ae03dc308207dd1d10705edfc214b219fbb"} Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.517004 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerStarted","Data":"2b2da556d8d5ceb79d9f0ad50be41dd604bef4e604d018fead743630456fc287"} Jan 21 06:37:35 crc 
kubenswrapper[4913]: I0121 06:37:35.597325 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.597408 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 06:37:36.097393877 +0000 UTC m=+145.893753540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.599548 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:35 crc kubenswrapper[4913]: E0121 06:37:35.599778 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 06:37:36.09976329 +0000 UTC m=+145.896122963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-78wqc" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.696378 4913 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T06:37:35.045015787Z","Handler":null,"Name":""} Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.699276 4913 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.699315 4913 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.700973 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.739566 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
(OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.746478 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jlb56"] Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.747837 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.750735 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.757519 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlb56"] Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.802847 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.802906 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-catalog-content\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.802962 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-utilities\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.802983 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956sp\" (UniqueName: \"kubernetes.io/projected/2255f06f-74ad-4308-9575-c04f8c24d4d5-kube-api-access-956sp\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.805053 4913 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.805091 4913 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.833558 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-78wqc\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 
21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.903790 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-catalog-content\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.903897 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-utilities\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.903927 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-956sp\" (UniqueName: \"kubernetes.io/projected/2255f06f-74ad-4308-9575-c04f8c24d4d5-kube-api-access-956sp\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.904250 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-catalog-content\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 06:37:35.904683 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-utilities\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:37:35 crc kubenswrapper[4913]: I0121 
06:37:35.927155 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-956sp\" (UniqueName: \"kubernetes.io/projected/2255f06f-74ad-4308-9575-c04f8c24d4d5-kube-api-access-956sp\") pod \"redhat-marketplace-jlb56\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.000228 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.069765 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.146065 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lpkw9"] Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.147460 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.160948 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpkw9"] Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.208439 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-catalog-content\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.208503 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-utilities\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.209949 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnkdv\" (UniqueName: \"kubernetes.io/projected/7521412f-3363-4617-9740-9dd9124df38e-kube-api-access-dnkdv\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.274972 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:36 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:36 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:36 crc kubenswrapper[4913]: healthz 
check failed Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.275097 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.311733 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-catalog-content\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.311786 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-utilities\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.311884 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnkdv\" (UniqueName: \"kubernetes.io/projected/7521412f-3363-4617-9740-9dd9124df38e-kube-api-access-dnkdv\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.313738 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-catalog-content\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.315395 4913 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-utilities\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.328498 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnkdv\" (UniqueName: \"kubernetes.io/projected/7521412f-3363-4617-9740-9dd9124df38e-kube-api-access-dnkdv\") pod \"redhat-marketplace-lpkw9\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.343297 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlb56"] Jan 21 06:37:36 crc kubenswrapper[4913]: W0121 06:37:36.350200 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2255f06f_74ad_4308_9575_c04f8c24d4d5.slice/crio-9d16632deb9ee398509b3e2cbaf8f4e0a65526fa7ef3942cc5e99a9c2c336883 WatchSource:0}: Error finding container 9d16632deb9ee398509b3e2cbaf8f4e0a65526fa7ef3942cc5e99a9c2c336883: Status 404 returned error can't find the container with id 9d16632deb9ee398509b3e2cbaf8f4e0a65526fa7ef3942cc5e99a9c2c336883 Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.415109 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.453071 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-78wqc"] Jan 
21 06:37:36 crc kubenswrapper[4913]: W0121 06:37:36.458071 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf46fd64f_46cb_4464_8f26_6df55bf77ba1.slice/crio-608586ddc5451cc666f70963a96a052acebcbef086316fbb9184345cbc03f7b5 WatchSource:0}: Error finding container 608586ddc5451cc666f70963a96a052acebcbef086316fbb9184345cbc03f7b5: Status 404 returned error can't find the container with id 608586ddc5451cc666f70963a96a052acebcbef086316fbb9184345cbc03f7b5 Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.465372 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.510480 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.511216 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.512814 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.513129 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.514939 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.516232 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.516273 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.516335 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:36 crc 
kubenswrapper[4913]: I0121 06:37:36.527058 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.527464 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.532112 4913 generic.go:334] "Generic (PLEG): container finished" podID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerID="ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea" exitCode=0 Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.535229 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.542689 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.542805 4913 generic.go:334] "Generic (PLEG): container finished" podID="0a7775c5-ca46-4ab1-b4e1-96c818301059" 
containerID="738d06bd8d6f5c4238e8c2c76b50d3599fa637b42354f2fb64db79311e1e4868" exitCode=0 Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.543262 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerDied","Data":"ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea"} Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.543292 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlb56" event={"ID":"2255f06f-74ad-4308-9575-c04f8c24d4d5","Type":"ContainerStarted","Data":"9d16632deb9ee398509b3e2cbaf8f4e0a65526fa7ef3942cc5e99a9c2c336883"} Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.543308 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" event={"ID":"f46fd64f-46cb-4464-8f26-6df55bf77ba1","Type":"ContainerStarted","Data":"608586ddc5451cc666f70963a96a052acebcbef086316fbb9184345cbc03f7b5"} Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.543320 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" event={"ID":"0a7775c5-ca46-4ab1-b4e1-96c818301059","Type":"ContainerDied","Data":"738d06bd8d6f5c4238e8c2c76b50d3599fa637b42354f2fb64db79311e1e4868"} Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.551630 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.554256 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" event={"ID":"3243dc09-2f27-4905-a1cc-08ff6d1e270f","Type":"ContainerStarted","Data":"0e0ca49b65d42705efaa888df39ebeec9faba1fa3577e89244b08af4547e4a1e"} Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.569880 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.586671 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jlcqw" podStartSLOduration=11.586654236 podStartE2EDuration="11.586654236s" podCreationTimestamp="2026-01-21 06:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:36.579791752 +0000 UTC m=+146.376151445" watchObservedRunningTime="2026-01-21 06:37:36.586654236 +0000 UTC m=+146.383013909" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.601513 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jn7zt" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.628555 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.628705 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.646459 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.733820 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.733917 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.734051 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.750522 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: W0121 06:37:36.781929 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-0820aba1bec7a1a7ead920ea5c4485fd89a90489aebee47413206def4337584a WatchSource:0}: Error finding container 0820aba1bec7a1a7ead920ea5c4485fd89a90489aebee47413206def4337584a: Status 404 returned error can't find the container with id 0820aba1bec7a1a7ead920ea5c4485fd89a90489aebee47413206def4337584a Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.828465 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.834761 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.839428 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/60ed8982-ee20-4330-861f-61509c39bbe7-metrics-certs\") pod \"network-metrics-daemon-wfcsc\" (UID: \"60ed8982-ee20-4330-861f-61509c39bbe7\") " pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.882129 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wfcsc" Jan 21 06:37:36 crc kubenswrapper[4913]: I0121 06:37:36.945243 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.031473 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpkw9"] Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.048759 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 06:37:37 crc kubenswrapper[4913]: W0121 06:37:37.108044 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda20e6104_9ef6_4f62_990b_e0d660e5b5c4.slice/crio-f3b79c18135699afbde101bb3aceb28268b68eef70e03f30a3be4101b6c7928a WatchSource:0}: Error finding container f3b79c18135699afbde101bb3aceb28268b68eef70e03f30a3be4101b6c7928a: Status 404 returned error can't find the container with id f3b79c18135699afbde101bb3aceb28268b68eef70e03f30a3be4101b6c7928a Jan 21 06:37:37 crc kubenswrapper[4913]: W0121 06:37:37.112766 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7521412f_3363_4617_9740_9dd9124df38e.slice/crio-4ca8f643221ff5fc21b50ab0e5b3cc24caa324b6eb82b83b209faf09771f9015 WatchSource:0}: Error finding container 4ca8f643221ff5fc21b50ab0e5b3cc24caa324b6eb82b83b209faf09771f9015: Status 404 returned error can't find the container with id 4ca8f643221ff5fc21b50ab0e5b3cc24caa324b6eb82b83b209faf09771f9015 Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.148254 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hpc4m"] Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.149346 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.152777 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.156558 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpc4m"] Jan 21 06:37:37 crc kubenswrapper[4913]: W0121 06:37:37.160357 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-368f250ed7a2c79a768190161ff2aa22bfcef5675332f95b7c6ca3f52816cd32 WatchSource:0}: Error finding container 368f250ed7a2c79a768190161ff2aa22bfcef5675332f95b7c6ca3f52816cd32: Status 404 returned error can't find the container with id 368f250ed7a2c79a768190161ff2aa22bfcef5675332f95b7c6ca3f52816cd32 Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.244584 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-catalog-content\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.244695 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bn5c\" (UniqueName: \"kubernetes.io/projected/d976374c-9adc-426a-9593-43e617e72281-kube-api-access-8bn5c\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.244715 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-utilities\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.278796 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:37 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:37 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:37 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.279087 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.279254 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wfcsc"] Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.348111 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-catalog-content\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.347011 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-catalog-content\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 
crc kubenswrapper[4913]: I0121 06:37:37.348772 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bn5c\" (UniqueName: \"kubernetes.io/projected/d976374c-9adc-426a-9593-43e617e72281-kube-api-access-8bn5c\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.348804 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-utilities\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.349707 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-utilities\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.368814 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bn5c\" (UniqueName: \"kubernetes.io/projected/d976374c-9adc-426a-9593-43e617e72281-kube-api-access-8bn5c\") pod \"redhat-operators-hpc4m\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: W0121 06:37:37.396181 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-d056494a249f4a754c5a2de915bb434c814508d90542831dc147ccf4340518d1 WatchSource:0}: Error finding container d056494a249f4a754c5a2de915bb434c814508d90542831dc147ccf4340518d1: Status 404 returned error can't find the container 
with id d056494a249f4a754c5a2de915bb434c814508d90542831dc147ccf4340518d1 Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.435644 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.435719 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.442185 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.472647 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.488865 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.488901 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.491009 4913 patch_prober.go:28] interesting pod/console-f9d7485db-k6jdd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.491049 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-k6jdd" podUID="08ac51dd-419d-4632-8a49-1972be301121" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.552364 4913 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-pkwc2"] Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.554192 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.557292 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkwc2"] Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.581420 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" event={"ID":"f46fd64f-46cb-4464-8f26-6df55bf77ba1","Type":"ContainerStarted","Data":"7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.582169 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.596010 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a20e6104-9ef6-4f62-990b-e0d660e5b5c4","Type":"ContainerStarted","Data":"f3b79c18135699afbde101bb3aceb28268b68eef70e03f30a3be4101b6c7928a"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.605665 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" podStartSLOduration=128.605569495 podStartE2EDuration="2m8.605569495s" podCreationTimestamp="2026-01-21 06:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:37.599146914 +0000 UTC m=+147.395506587" watchObservedRunningTime="2026-01-21 06:37:37.605569495 +0000 UTC m=+147.401929168" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.605792 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d056494a249f4a754c5a2de915bb434c814508d90542831dc147ccf4340518d1"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.611180 4913 generic.go:334] "Generic (PLEG): container finished" podID="7521412f-3363-4617-9740-9dd9124df38e" containerID="93e15fb5b03e79a08467b762e78a24c070dcc8c24e8f33b03e16ab6662aedb40" exitCode=0 Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.611256 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpkw9" event={"ID":"7521412f-3363-4617-9740-9dd9124df38e","Type":"ContainerDied","Data":"93e15fb5b03e79a08467b762e78a24c070dcc8c24e8f33b03e16ab6662aedb40"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.611281 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpkw9" event={"ID":"7521412f-3363-4617-9740-9dd9124df38e","Type":"ContainerStarted","Data":"4ca8f643221ff5fc21b50ab0e5b3cc24caa324b6eb82b83b209faf09771f9015"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.615431 4913 generic.go:334] "Generic (PLEG): container finished" podID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerID="afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7" exitCode=0 Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.615498 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlb56" event={"ID":"2255f06f-74ad-4308-9575-c04f8c24d4d5","Type":"ContainerDied","Data":"afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.623428 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.623465 4913 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.627064 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" event={"ID":"60ed8982-ee20-4330-861f-61509c39bbe7","Type":"ContainerStarted","Data":"1888ea49e90b59ad215144de24326b4ef5ee1d769608893bb4af01982a17bb80"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.631036 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b0dbfaddaa6ec33143142e8918c05587640ff5e2a874ef83645d5bb9f7145e8f"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.631100 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0820aba1bec7a1a7ead920ea5c4485fd89a90489aebee47413206def4337584a"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.640656 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"61765a860322b4c086d638b92e091fc73dae3b3b3faacac51fdba49fb79bff32"} Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.640772 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.640787 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"368f250ed7a2c79a768190161ff2aa22bfcef5675332f95b7c6ca3f52816cd32"} Jan 21 06:37:37 crc 
kubenswrapper[4913]: I0121 06:37:37.641150 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.652726 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-p4428" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.654521 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rcfg\" (UniqueName: \"kubernetes.io/projected/14e729d1-3cb1-49d7-b34f-d997333ec65f-kube-api-access-5rcfg\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.654564 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-utilities\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.654705 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-catalog-content\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.759146 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rcfg\" (UniqueName: \"kubernetes.io/projected/14e729d1-3cb1-49d7-b34f-d997333ec65f-kube-api-access-5rcfg\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" 
Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.759303 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-utilities\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.759362 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-catalog-content\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.760736 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-utilities\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.760948 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-catalog-content\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.809271 4913 patch_prober.go:28] interesting pod/downloads-7954f5f757-k855s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.809322 4913 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-k855s" podUID="c5567f5a-5084-4cc6-b654-f1190dcc0064" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.809333 4913 patch_prober.go:28] interesting pod/downloads-7954f5f757-k855s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.809393 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k855s" podUID="c5567f5a-5084-4cc6-b654-f1190dcc0064" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.815033 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rcfg\" (UniqueName: \"kubernetes.io/projected/14e729d1-3cb1-49d7-b34f-d997333ec65f-kube-api-access-5rcfg\") pod \"redhat-operators-pkwc2\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:37 crc kubenswrapper[4913]: I0121 06:37:37.913912 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.104389 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpc4m"] Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.270024 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.281907 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:38 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:38 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:38 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.281951 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.296443 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.319303 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.319627 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.375067 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7775c5-ca46-4ab1-b4e1-96c818301059-config-volume\") pod \"0a7775c5-ca46-4ab1-b4e1-96c818301059\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.375136 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a7775c5-ca46-4ab1-b4e1-96c818301059-secret-volume\") pod \"0a7775c5-ca46-4ab1-b4e1-96c818301059\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.375234 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97qxg\" (UniqueName: \"kubernetes.io/projected/0a7775c5-ca46-4ab1-b4e1-96c818301059-kube-api-access-97qxg\") pod \"0a7775c5-ca46-4ab1-b4e1-96c818301059\" (UID: \"0a7775c5-ca46-4ab1-b4e1-96c818301059\") " Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.376769 4913 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7775c5-ca46-4ab1-b4e1-96c818301059-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a7775c5-ca46-4ab1-b4e1-96c818301059" (UID: "0a7775c5-ca46-4ab1-b4e1-96c818301059"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.382437 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a7775c5-ca46-4ab1-b4e1-96c818301059-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a7775c5-ca46-4ab1-b4e1-96c818301059" (UID: "0a7775c5-ca46-4ab1-b4e1-96c818301059"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.386697 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7775c5-ca46-4ab1-b4e1-96c818301059-kube-api-access-97qxg" (OuterVolumeSpecName: "kube-api-access-97qxg") pod "0a7775c5-ca46-4ab1-b4e1-96c818301059" (UID: "0a7775c5-ca46-4ab1-b4e1-96c818301059"). InnerVolumeSpecName "kube-api-access-97qxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.476885 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97qxg\" (UniqueName: \"kubernetes.io/projected/0a7775c5-ca46-4ab1-b4e1-96c818301059-kube-api-access-97qxg\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.476929 4913 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7775c5-ca46-4ab1-b4e1-96c818301059-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.476938 4913 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a7775c5-ca46-4ab1-b4e1-96c818301059-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.606373 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkwc2"] Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.646421 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" event={"ID":"60ed8982-ee20-4330-861f-61509c39bbe7","Type":"ContainerStarted","Data":"be846a9ebb7209b98d835a31a04774bb1797438fc70428969ef1f316fb98ba32"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.646472 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wfcsc" event={"ID":"60ed8982-ee20-4330-861f-61509c39bbe7","Type":"ContainerStarted","Data":"6bde92ee8efa1aa43a2ea494fcc9e1c3e7d5da9354502eac8a4ec5892221663e"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.648683 4913 generic.go:334] "Generic (PLEG): container finished" podID="d976374c-9adc-426a-9593-43e617e72281" containerID="a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9" exitCode=0 Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.648728 
4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpc4m" event={"ID":"d976374c-9adc-426a-9593-43e617e72281","Type":"ContainerDied","Data":"a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.648743 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpc4m" event={"ID":"d976374c-9adc-426a-9593-43e617e72281","Type":"ContainerStarted","Data":"cd166342c5c7d3828aa55b99bbc4cb3c9d3bdf94c3c49466b8128a155f8f51f9"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.651166 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" event={"ID":"0a7775c5-ca46-4ab1-b4e1-96c818301059","Type":"ContainerDied","Data":"8af3a3fe3dfde5bcb547f586da849f79867bd334c534af99562358101ad4a451"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.651198 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8af3a3fe3dfde5bcb547f586da849f79867bd334c534af99562358101ad4a451" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.651235 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482950-sxbqm" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.657443 4913 generic.go:334] "Generic (PLEG): container finished" podID="a20e6104-9ef6-4f62-990b-e0d660e5b5c4" containerID="afa1950ea9d1488c78126bf10cbd77be0b61730b70277423566008c4a2b19495" exitCode=0 Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.657521 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a20e6104-9ef6-4f62-990b-e0d660e5b5c4","Type":"ContainerDied","Data":"afa1950ea9d1488c78126bf10cbd77be0b61730b70277423566008c4a2b19495"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.661047 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-wfcsc" podStartSLOduration=130.66102891 podStartE2EDuration="2m10.66102891s" podCreationTimestamp="2026-01-21 06:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:38.659218902 +0000 UTC m=+148.455578575" watchObservedRunningTime="2026-01-21 06:37:38.66102891 +0000 UTC m=+148.457388583" Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.672090 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"da0bb8f08d9b375c9f1e359a284b3459ccad7175074392fc51a7e212cf1c22e7"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.676480 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwc2" event={"ID":"14e729d1-3cb1-49d7-b34f-d997333ec65f","Type":"ContainerStarted","Data":"8f5064bd05054c2b02632229ded6fedcb4045b72cf1e85d3555133283a45b0c3"} Jan 21 06:37:38 crc kubenswrapper[4913]: I0121 06:37:38.685569 4913 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-b8vxc" Jan 21 06:37:39 crc kubenswrapper[4913]: I0121 06:37:39.275173 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:39 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:39 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:39 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:39 crc kubenswrapper[4913]: I0121 06:37:39.275472 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:39 crc kubenswrapper[4913]: I0121 06:37:39.695673 4913 generic.go:334] "Generic (PLEG): container finished" podID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerID="4536887c323df45fbc4166635e0604a06736c4d0fb3091dd1489a3822a0f1cf4" exitCode=0 Jan 21 06:37:39 crc kubenswrapper[4913]: I0121 06:37:39.695793 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwc2" event={"ID":"14e729d1-3cb1-49d7-b34f-d997333ec65f","Type":"ContainerDied","Data":"4536887c323df45fbc4166635e0604a06736c4d0fb3091dd1489a3822a0f1cf4"} Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.080825 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.222994 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kubelet-dir\") pod \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.223102 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a20e6104-9ef6-4f62-990b-e0d660e5b5c4" (UID: "a20e6104-9ef6-4f62-990b-e0d660e5b5c4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.223143 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kube-api-access\") pod \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\" (UID: \"a20e6104-9ef6-4f62-990b-e0d660e5b5c4\") " Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.223337 4913 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.227876 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a20e6104-9ef6-4f62-990b-e0d660e5b5c4" (UID: "a20e6104-9ef6-4f62-990b-e0d660e5b5c4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.272412 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:40 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:40 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:40 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.272469 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.324179 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a20e6104-9ef6-4f62-990b-e0d660e5b5c4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.782968 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a20e6104-9ef6-4f62-990b-e0d660e5b5c4","Type":"ContainerDied","Data":"f3b79c18135699afbde101bb3aceb28268b68eef70e03f30a3be4101b6c7928a"} Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.783002 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b79c18135699afbde101bb3aceb28268b68eef70e03f30a3be4101b6c7928a" Jan 21 06:37:40 crc kubenswrapper[4913]: I0121 06:37:40.783026 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 06:37:41 crc kubenswrapper[4913]: I0121 06:37:41.271352 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:41 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:41 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:41 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:41 crc kubenswrapper[4913]: I0121 06:37:41.271418 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.273277 4913 patch_prober.go:28] interesting pod/router-default-5444994796-cxnpf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 06:37:42 crc kubenswrapper[4913]: [-]has-synced failed: reason withheld Jan 21 06:37:42 crc kubenswrapper[4913]: [+]process-running ok Jan 21 06:37:42 crc kubenswrapper[4913]: healthz check failed Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.273347 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cxnpf" podUID="3e0ca241-c740-42a3-8fd9-970024126d64" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.519208 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 06:37:42 crc kubenswrapper[4913]: E0121 06:37:42.520040 4913 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7775c5-ca46-4ab1-b4e1-96c818301059" containerName="collect-profiles" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.520061 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7775c5-ca46-4ab1-b4e1-96c818301059" containerName="collect-profiles" Jan 21 06:37:42 crc kubenswrapper[4913]: E0121 06:37:42.520079 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20e6104-9ef6-4f62-990b-e0d660e5b5c4" containerName="pruner" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.520086 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20e6104-9ef6-4f62-990b-e0d660e5b5c4" containerName="pruner" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.520182 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7775c5-ca46-4ab1-b4e1-96c818301059" containerName="collect-profiles" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.520196 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20e6104-9ef6-4f62-990b-e0d660e5b5c4" containerName="pruner" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.520645 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.522410 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.522561 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.542960 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.655672 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1319231d-e415-4c56-a0e2-7584edddc7e4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.655718 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1319231d-e415-4c56-a0e2-7584edddc7e4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.756682 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1319231d-e415-4c56-a0e2-7584edddc7e4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.756786 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/1319231d-e415-4c56-a0e2-7584edddc7e4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.756919 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1319231d-e415-4c56-a0e2-7584edddc7e4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.773691 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1319231d-e415-4c56-a0e2-7584edddc7e4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.843080 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:37:42 crc kubenswrapper[4913]: I0121 06:37:42.844701 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:43 crc kubenswrapper[4913]: I0121 06:37:43.038898 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bkrnj" Jan 21 06:37:43 crc kubenswrapper[4913]: I0121 06:37:43.199041 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 06:37:43 crc kubenswrapper[4913]: W0121 06:37:43.266497 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1319231d_e415_4c56_a0e2_7584edddc7e4.slice/crio-c5fb770785013d0df45404893a2431e18b466f0a2b22cfa0c7b4e91594717482 WatchSource:0}: Error finding container c5fb770785013d0df45404893a2431e18b466f0a2b22cfa0c7b4e91594717482: Status 404 returned error can't find the container with id c5fb770785013d0df45404893a2431e18b466f0a2b22cfa0c7b4e91594717482 Jan 21 06:37:43 crc kubenswrapper[4913]: I0121 06:37:43.274777 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:43 crc kubenswrapper[4913]: I0121 06:37:43.277787 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-cxnpf" Jan 21 06:37:43 crc kubenswrapper[4913]: I0121 06:37:43.828014 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1319231d-e415-4c56-a0e2-7584edddc7e4","Type":"ContainerStarted","Data":"c5fb770785013d0df45404893a2431e18b466f0a2b22cfa0c7b4e91594717482"} Jan 21 06:37:44 crc kubenswrapper[4913]: I0121 06:37:44.834675 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1319231d-e415-4c56-a0e2-7584edddc7e4","Type":"ContainerStarted","Data":"f373a6052d782241dd48fe4cf32a660a6c768adbd887f17e18f4463268526fd8"} Jan 21 06:37:44 crc 
kubenswrapper[4913]: I0121 06:37:44.852389 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.852373817 podStartE2EDuration="2.852373817s" podCreationTimestamp="2026-01-21 06:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:37:44.84910836 +0000 UTC m=+154.645468033" watchObservedRunningTime="2026-01-21 06:37:44.852373817 +0000 UTC m=+154.648733490" Jan 21 06:37:45 crc kubenswrapper[4913]: I0121 06:37:45.842970 4913 generic.go:334] "Generic (PLEG): container finished" podID="1319231d-e415-4c56-a0e2-7584edddc7e4" containerID="f373a6052d782241dd48fe4cf32a660a6c768adbd887f17e18f4463268526fd8" exitCode=0 Jan 21 06:37:45 crc kubenswrapper[4913]: I0121 06:37:45.843107 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1319231d-e415-4c56-a0e2-7584edddc7e4","Type":"ContainerDied","Data":"f373a6052d782241dd48fe4cf32a660a6c768adbd887f17e18f4463268526fd8"} Jan 21 06:37:47 crc kubenswrapper[4913]: I0121 06:37:47.492026 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:47 crc kubenswrapper[4913]: I0121 06:37:47.495649 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-k6jdd" Jan 21 06:37:47 crc kubenswrapper[4913]: I0121 06:37:47.811958 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-k855s" Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.872505 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"1319231d-e415-4c56-a0e2-7584edddc7e4","Type":"ContainerDied","Data":"c5fb770785013d0df45404893a2431e18b466f0a2b22cfa0c7b4e91594717482"} Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.872980 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5fb770785013d0df45404893a2431e18b466f0a2b22cfa0c7b4e91594717482" Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.893777 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.997295 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1319231d-e415-4c56-a0e2-7584edddc7e4-kubelet-dir\") pod \"1319231d-e415-4c56-a0e2-7584edddc7e4\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.997369 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1319231d-e415-4c56-a0e2-7584edddc7e4-kube-api-access\") pod \"1319231d-e415-4c56-a0e2-7584edddc7e4\" (UID: \"1319231d-e415-4c56-a0e2-7584edddc7e4\") " Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.997424 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1319231d-e415-4c56-a0e2-7584edddc7e4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1319231d-e415-4c56-a0e2-7584edddc7e4" (UID: "1319231d-e415-4c56-a0e2-7584edddc7e4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:37:50 crc kubenswrapper[4913]: I0121 06:37:50.997746 4913 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1319231d-e415-4c56-a0e2-7584edddc7e4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:51 crc kubenswrapper[4913]: I0121 06:37:51.003670 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1319231d-e415-4c56-a0e2-7584edddc7e4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1319231d-e415-4c56-a0e2-7584edddc7e4" (UID: "1319231d-e415-4c56-a0e2-7584edddc7e4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:37:51 crc kubenswrapper[4913]: I0121 06:37:51.098868 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1319231d-e415-4c56-a0e2-7584edddc7e4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:37:51 crc kubenswrapper[4913]: I0121 06:37:51.878077 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 06:37:51 crc kubenswrapper[4913]: E0121 06:37:51.956322 4913 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod1319231d_e415_4c56_a0e2_7584edddc7e4.slice\": RecentStats: unable to find data in memory cache]" Jan 21 06:37:56 crc kubenswrapper[4913]: I0121 06:37:56.008719 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:38:08 crc kubenswrapper[4913]: I0121 06:38:08.243345 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5zk5l" Jan 21 06:38:08 crc kubenswrapper[4913]: I0121 06:38:08.319702 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:38:08 crc kubenswrapper[4913]: I0121 06:38:08.319784 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.325504 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 06:38:16 crc kubenswrapper[4913]: E0121 06:38:16.326460 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1319231d-e415-4c56-a0e2-7584edddc7e4" containerName="pruner" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.326487 4913 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1319231d-e415-4c56-a0e2-7584edddc7e4" containerName="pruner" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.326813 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="1319231d-e415-4c56-a0e2-7584edddc7e4" containerName="pruner" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.328650 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.332711 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.333816 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.341129 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.510222 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.510356 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.611693 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.611870 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.612098 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.644971 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:16 crc kubenswrapper[4913]: I0121 06:38:16.662451 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:17 crc kubenswrapper[4913]: I0121 06:38:17.564506 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 06:38:20 crc kubenswrapper[4913]: E0121 06:38:20.031407 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 06:38:20 crc kubenswrapper[4913]: E0121 06:38:20.032034 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9dfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:ni
l,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-rm75l_openshift-marketplace(be61dd34-8d4d-4525-8187-3c21f22cd88a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:20 crc kubenswrapper[4913]: E0121 06:38:20.033280 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rm75l" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.514388 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.515326 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.522055 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.677273 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kube-api-access\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.677841 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-var-lock\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.678271 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.780091 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-var-lock\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.780153 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.780219 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kube-api-access\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.780226 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-var-lock\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.780343 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.799571 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kube-api-access\") pod \"installer-9-crc\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:20 crc kubenswrapper[4913]: I0121 06:38:20.838412 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:38:23 crc kubenswrapper[4913]: E0121 06:38:23.257909 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 06:38:23 crc kubenswrapper[4913]: E0121 06:38:23.258405 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggtzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mvlq6_openshift-marketplace(f2b20a33-f426-426f-9657-3d11d403629f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:23 crc kubenswrapper[4913]: E0121 06:38:23.259888 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mvlq6" podUID="f2b20a33-f426-426f-9657-3d11d403629f" Jan 21 06:38:24 crc kubenswrapper[4913]: E0121 06:38:24.234152 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rm75l" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" Jan 21 06:38:24 crc kubenswrapper[4913]: E0121 06:38:24.234169 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mvlq6" podUID="f2b20a33-f426-426f-9657-3d11d403629f" Jan 21 06:38:24 crc kubenswrapper[4913]: E0121 06:38:24.326785 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 06:38:24 crc kubenswrapper[4913]: E0121 06:38:24.327096 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9zd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ffbwk_openshift-marketplace(92ab7368-d5ff-4ecc-846a-96791a313bce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:24 crc kubenswrapper[4913]: E0121 06:38:24.328411 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ffbwk" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" Jan 21 06:38:27 crc kubenswrapper[4913]: E0121 06:38:27.771097 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ffbwk" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" Jan 21 06:38:27 crc kubenswrapper[4913]: E0121 06:38:27.888779 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 06:38:27 crc kubenswrapper[4913]: E0121 06:38:27.888997 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rcfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pkwc2_openshift-marketplace(14e729d1-3cb1-49d7-b34f-d997333ec65f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:27 crc kubenswrapper[4913]: E0121 06:38:27.890248 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pkwc2" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" Jan 21 06:38:29 crc 
kubenswrapper[4913]: E0121 06:38:29.072682 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pkwc2" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" Jan 21 06:38:29 crc kubenswrapper[4913]: E0121 06:38:29.317182 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 06:38:29 crc kubenswrapper[4913]: E0121 06:38:29.317318 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-956sp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jlb56_openshift-marketplace(2255f06f-74ad-4308-9575-c04f8c24d4d5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:29 crc kubenswrapper[4913]: E0121 06:38:29.318484 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jlb56" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" Jan 21 06:38:29 crc 
kubenswrapper[4913]: I0121 06:38:29.500981 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 06:38:29 crc kubenswrapper[4913]: I0121 06:38:29.577074 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 06:38:29 crc kubenswrapper[4913]: E0121 06:38:29.606552 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 06:38:29 crc kubenswrapper[4913]: E0121 06:38:29.606797 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82n2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:
nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fszdj_openshift-marketplace(b5a378fe-18a6-4be0-8d56-eaddc377bd8b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:29 crc kubenswrapper[4913]: E0121 06:38:29.615744 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fszdj" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.042052 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.042648 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bn5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hpc4m_openshift-marketplace(d976374c-9adc-426a-9593-43e617e72281): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.044224 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hpc4m" podUID="d976374c-9adc-426a-9593-43e617e72281" Jan 21 06:38:30 crc 
kubenswrapper[4913]: I0121 06:38:30.113130 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f","Type":"ContainerStarted","Data":"0b206ec76a91256c0c91606cbe0925f94e7fbd4e7b6b747641a151b3beb320e9"} Jan 21 06:38:30 crc kubenswrapper[4913]: I0121 06:38:30.113177 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f","Type":"ContainerStarted","Data":"5cfc103b743cf4cd9f52146d725a2c25d6e49ba42c9012d9ddde5cfdedf47ef3"} Jan 21 06:38:30 crc kubenswrapper[4913]: I0121 06:38:30.115578 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d389cec5-c315-4a24-92fd-d5ed381b3b5f","Type":"ContainerStarted","Data":"71ed111272da72dc9748856e43d4d8004c600de64ea2bf42ebc31c9d37d07f49"} Jan 21 06:38:30 crc kubenswrapper[4913]: I0121 06:38:30.115644 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d389cec5-c315-4a24-92fd-d5ed381b3b5f","Type":"ContainerStarted","Data":"f6d03d89bb40acdb0afb436d8c01cd40d38c5fb443dddd3f086059c789e95db6"} Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.116975 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fszdj" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.117056 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jlb56" 
podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.127841 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hpc4m" podUID="d976374c-9adc-426a-9593-43e617e72281" Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.154769 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.155023 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnkdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lpkw9_openshift-marketplace(7521412f-3363-4617-9740-9dd9124df38e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 06:38:30 crc kubenswrapper[4913]: E0121 06:38:30.156317 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lpkw9" podUID="7521412f-3363-4617-9740-9dd9124df38e" Jan 21 06:38:30 crc 
kubenswrapper[4913]: I0121 06:38:30.165300 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=10.16528359 podStartE2EDuration="10.16528359s" podCreationTimestamp="2026-01-21 06:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:38:30.147028061 +0000 UTC m=+199.943387754" watchObservedRunningTime="2026-01-21 06:38:30.16528359 +0000 UTC m=+199.961643263" Jan 21 06:38:30 crc kubenswrapper[4913]: I0121 06:38:30.181447 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=14.181423103 podStartE2EDuration="14.181423103s" podCreationTimestamp="2026-01-21 06:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:38:30.17404027 +0000 UTC m=+199.970399933" watchObservedRunningTime="2026-01-21 06:38:30.181423103 +0000 UTC m=+199.977782776" Jan 21 06:38:31 crc kubenswrapper[4913]: I0121 06:38:31.124550 4913 generic.go:334] "Generic (PLEG): container finished" podID="d389cec5-c315-4a24-92fd-d5ed381b3b5f" containerID="71ed111272da72dc9748856e43d4d8004c600de64ea2bf42ebc31c9d37d07f49" exitCode=0 Jan 21 06:38:31 crc kubenswrapper[4913]: I0121 06:38:31.124803 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d389cec5-c315-4a24-92fd-d5ed381b3b5f","Type":"ContainerDied","Data":"71ed111272da72dc9748856e43d4d8004c600de64ea2bf42ebc31c9d37d07f49"} Jan 21 06:38:31 crc kubenswrapper[4913]: E0121 06:38:31.129435 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-lpkw9" podUID="7521412f-3363-4617-9740-9dd9124df38e" Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.382961 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.446510 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kubelet-dir\") pod \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.446664 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kube-api-access\") pod \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\" (UID: \"d389cec5-c315-4a24-92fd-d5ed381b3b5f\") " Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.448705 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d389cec5-c315-4a24-92fd-d5ed381b3b5f" (UID: "d389cec5-c315-4a24-92fd-d5ed381b3b5f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.452675 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d389cec5-c315-4a24-92fd-d5ed381b3b5f" (UID: "d389cec5-c315-4a24-92fd-d5ed381b3b5f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.547945 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:38:32 crc kubenswrapper[4913]: I0121 06:38:32.547984 4913 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d389cec5-c315-4a24-92fd-d5ed381b3b5f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:38:33 crc kubenswrapper[4913]: I0121 06:38:33.136451 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d389cec5-c315-4a24-92fd-d5ed381b3b5f","Type":"ContainerDied","Data":"f6d03d89bb40acdb0afb436d8c01cd40d38c5fb443dddd3f086059c789e95db6"} Jan 21 06:38:33 crc kubenswrapper[4913]: I0121 06:38:33.136854 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6d03d89bb40acdb0afb436d8c01cd40d38c5fb443dddd3f086059c789e95db6" Jan 21 06:38:33 crc kubenswrapper[4913]: I0121 06:38:33.136529 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 06:38:37 crc kubenswrapper[4913]: I0121 06:38:37.157403 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerStarted","Data":"9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4"} Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.164322 4913 generic.go:334] "Generic (PLEG): container finished" podID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerID="9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4" exitCode=0 Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.164358 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerDied","Data":"9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4"} Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.318858 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.319013 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.319066 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.320512 
4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 06:38:38 crc kubenswrapper[4913]: I0121 06:38:38.320813 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355" gracePeriod=600 Jan 21 06:38:39 crc kubenswrapper[4913]: I0121 06:38:39.172297 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerStarted","Data":"f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655"} Jan 21 06:38:39 crc kubenswrapper[4913]: I0121 06:38:39.174547 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355" exitCode=0 Jan 21 06:38:39 crc kubenswrapper[4913]: I0121 06:38:39.174653 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355"} Jan 21 06:38:39 crc kubenswrapper[4913]: I0121 06:38:39.174682 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" 
event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"a9a0d7f92e1ba661738bb80d2aff2afeda7674c7a8aec1c1649a1b8affcc4dd3"} Jan 21 06:38:40 crc kubenswrapper[4913]: I0121 06:38:40.183209 4913 generic.go:334] "Generic (PLEG): container finished" podID="f2b20a33-f426-426f-9657-3d11d403629f" containerID="7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033" exitCode=0 Jan 21 06:38:40 crc kubenswrapper[4913]: I0121 06:38:40.183281 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlq6" event={"ID":"f2b20a33-f426-426f-9657-3d11d403629f","Type":"ContainerDied","Data":"7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033"} Jan 21 06:38:40 crc kubenswrapper[4913]: I0121 06:38:40.205885 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rm75l" podStartSLOduration=4.038868056 podStartE2EDuration="1m6.205860447s" podCreationTimestamp="2026-01-21 06:37:34 +0000 UTC" firstStartedPulling="2026-01-21 06:37:36.53398838 +0000 UTC m=+146.330348053" lastFinishedPulling="2026-01-21 06:38:38.700980771 +0000 UTC m=+208.497340444" observedRunningTime="2026-01-21 06:38:40.201343734 +0000 UTC m=+209.997703407" watchObservedRunningTime="2026-01-21 06:38:40.205860447 +0000 UTC m=+210.002220160" Jan 21 06:38:41 crc kubenswrapper[4913]: I0121 06:38:41.193980 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlq6" event={"ID":"f2b20a33-f426-426f-9657-3d11d403629f","Type":"ContainerStarted","Data":"f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108"} Jan 21 06:38:41 crc kubenswrapper[4913]: I0121 06:38:41.210560 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mvlq6" podStartSLOduration=2.980940076 podStartE2EDuration="1m8.210542955s" podCreationTimestamp="2026-01-21 06:37:33 +0000 UTC" 
firstStartedPulling="2026-01-21 06:37:35.506883181 +0000 UTC m=+145.303242854" lastFinishedPulling="2026-01-21 06:38:40.73648606 +0000 UTC m=+210.532845733" observedRunningTime="2026-01-21 06:38:41.209737422 +0000 UTC m=+211.006097095" watchObservedRunningTime="2026-01-21 06:38:41.210542955 +0000 UTC m=+211.006902618" Jan 21 06:38:42 crc kubenswrapper[4913]: I0121 06:38:42.201212 4913 generic.go:334] "Generic (PLEG): container finished" podID="d976374c-9adc-426a-9593-43e617e72281" containerID="79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be" exitCode=0 Jan 21 06:38:42 crc kubenswrapper[4913]: I0121 06:38:42.201280 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpc4m" event={"ID":"d976374c-9adc-426a-9593-43e617e72281","Type":"ContainerDied","Data":"79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be"} Jan 21 06:38:42 crc kubenswrapper[4913]: I0121 06:38:42.203501 4913 generic.go:334] "Generic (PLEG): container finished" podID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerID="dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28" exitCode=0 Jan 21 06:38:42 crc kubenswrapper[4913]: I0121 06:38:42.203539 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffbwk" event={"ID":"92ab7368-d5ff-4ecc-846a-96791a313bce","Type":"ContainerDied","Data":"dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28"} Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.208948 4913 generic.go:334] "Generic (PLEG): container finished" podID="7521412f-3363-4617-9740-9dd9124df38e" containerID="7df421d769e0135e3fb9a32354b2ade06ec971399dd6c8201985258f4e4a34b1" exitCode=0 Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.209010 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpkw9" 
event={"ID":"7521412f-3363-4617-9740-9dd9124df38e","Type":"ContainerDied","Data":"7df421d769e0135e3fb9a32354b2ade06ec971399dd6c8201985258f4e4a34b1"} Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.211245 4913 generic.go:334] "Generic (PLEG): container finished" podID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerID="a0c217ee06e4d0effa4b06e0042da74da4b4c664dbe6ca8ae4a8f377c3e40172" exitCode=0 Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.211313 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwc2" event={"ID":"14e729d1-3cb1-49d7-b34f-d997333ec65f","Type":"ContainerDied","Data":"a0c217ee06e4d0effa4b06e0042da74da4b4c664dbe6ca8ae4a8f377c3e40172"} Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.215466 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffbwk" event={"ID":"92ab7368-d5ff-4ecc-846a-96791a313bce","Type":"ContainerStarted","Data":"791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2"} Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.220198 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpc4m" event={"ID":"d976374c-9adc-426a-9593-43e617e72281","Type":"ContainerStarted","Data":"f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9"} Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.246709 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ffbwk" podStartSLOduration=2.968081665 podStartE2EDuration="1m10.246690901s" podCreationTimestamp="2026-01-21 06:37:33 +0000 UTC" firstStartedPulling="2026-01-21 06:37:35.5060644 +0000 UTC m=+145.302424073" lastFinishedPulling="2026-01-21 06:38:42.784673636 +0000 UTC m=+212.581033309" observedRunningTime="2026-01-21 06:38:43.244479351 +0000 UTC m=+213.040839014" watchObservedRunningTime="2026-01-21 06:38:43.246690901 +0000 UTC m=+213.043050584" 
Jan 21 06:38:43 crc kubenswrapper[4913]: I0121 06:38:43.275750 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hpc4m" podStartSLOduration=2.197654708 podStartE2EDuration="1m6.275678865s" podCreationTimestamp="2026-01-21 06:37:37 +0000 UTC" firstStartedPulling="2026-01-21 06:37:38.658338788 +0000 UTC m=+148.454698461" lastFinishedPulling="2026-01-21 06:38:42.736362945 +0000 UTC m=+212.532722618" observedRunningTime="2026-01-21 06:38:43.27294137 +0000 UTC m=+213.069301033" watchObservedRunningTime="2026-01-21 06:38:43.275678865 +0000 UTC m=+213.072038538" Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.119948 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.120322 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.191870 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.233562 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpkw9" event={"ID":"7521412f-3363-4617-9740-9dd9124df38e","Type":"ContainerStarted","Data":"374c9257cde812fb068c323490cd73278adfe48a94284e17a0ade3e0c70e7c11"} Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.236310 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwc2" event={"ID":"14e729d1-3cb1-49d7-b34f-d997333ec65f","Type":"ContainerStarted","Data":"b9d441556daa0971bf71f920fb3f706db223927472ed6e26fcdf0552430912fb"} Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.257135 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-lpkw9" podStartSLOduration=2.283965613 podStartE2EDuration="1m8.257114616s" podCreationTimestamp="2026-01-21 06:37:36 +0000 UTC" firstStartedPulling="2026-01-21 06:37:37.619692302 +0000 UTC m=+147.416051975" lastFinishedPulling="2026-01-21 06:38:43.592841305 +0000 UTC m=+213.389200978" observedRunningTime="2026-01-21 06:38:44.255194413 +0000 UTC m=+214.051554086" watchObservedRunningTime="2026-01-21 06:38:44.257114616 +0000 UTC m=+214.053474289" Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.283697 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pkwc2" podStartSLOduration=3.4093494619999998 podStartE2EDuration="1m7.283673783s" podCreationTimestamp="2026-01-21 06:37:37 +0000 UTC" firstStartedPulling="2026-01-21 06:37:39.697644273 +0000 UTC m=+149.494003946" lastFinishedPulling="2026-01-21 06:38:43.571968594 +0000 UTC m=+213.368328267" observedRunningTime="2026-01-21 06:38:44.279990722 +0000 UTC m=+214.076350395" watchObservedRunningTime="2026-01-21 06:38:44.283673783 +0000 UTC m=+214.080033466" Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.314085 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.314135 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.521124 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.521176 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:38:44 crc kubenswrapper[4913]: I0121 06:38:44.556078 4913 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:38:45 crc kubenswrapper[4913]: I0121 06:38:45.242189 4913 generic.go:334] "Generic (PLEG): container finished" podID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerID="d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc" exitCode=0 Jan 21 06:38:45 crc kubenswrapper[4913]: I0121 06:38:45.242276 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlb56" event={"ID":"2255f06f-74ad-4308-9575-c04f8c24d4d5","Type":"ContainerDied","Data":"d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc"} Jan 21 06:38:45 crc kubenswrapper[4913]: I0121 06:38:45.245224 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerStarted","Data":"eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf"} Jan 21 06:38:45 crc kubenswrapper[4913]: I0121 06:38:45.308822 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:38:45 crc kubenswrapper[4913]: I0121 06:38:45.350793 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ffbwk" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="registry-server" probeResult="failure" output=< Jan 21 06:38:45 crc kubenswrapper[4913]: timeout: failed to connect service ":50051" within 1s Jan 21 06:38:45 crc kubenswrapper[4913]: > Jan 21 06:38:46 crc kubenswrapper[4913]: I0121 06:38:46.198667 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rm75l"] Jan 21 06:38:46 crc kubenswrapper[4913]: I0121 06:38:46.250319 4913 generic.go:334] "Generic (PLEG): container finished" podID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" 
containerID="eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf" exitCode=0 Jan 21 06:38:46 crc kubenswrapper[4913]: I0121 06:38:46.250373 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerDied","Data":"eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf"} Jan 21 06:38:46 crc kubenswrapper[4913]: I0121 06:38:46.465953 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:38:46 crc kubenswrapper[4913]: I0121 06:38:46.465999 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:38:46 crc kubenswrapper[4913]: I0121 06:38:46.544360 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:38:47 crc kubenswrapper[4913]: I0121 06:38:47.255638 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rm75l" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="registry-server" containerID="cri-o://f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655" gracePeriod=2 Jan 21 06:38:47 crc kubenswrapper[4913]: I0121 06:38:47.472862 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:38:47 crc kubenswrapper[4913]: I0121 06:38:47.472924 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:38:47 crc kubenswrapper[4913]: I0121 06:38:47.921530 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:38:47 crc kubenswrapper[4913]: I0121 06:38:47.921911 4913 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:38:48 crc kubenswrapper[4913]: I0121 06:38:48.513069 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hpc4m" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="registry-server" probeResult="failure" output=< Jan 21 06:38:48 crc kubenswrapper[4913]: timeout: failed to connect service ":50051" within 1s Jan 21 06:38:48 crc kubenswrapper[4913]: > Jan 21 06:38:48 crc kubenswrapper[4913]: I0121 06:38:48.959114 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pkwc2" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="registry-server" probeResult="failure" output=< Jan 21 06:38:48 crc kubenswrapper[4913]: timeout: failed to connect service ":50051" within 1s Jan 21 06:38:48 crc kubenswrapper[4913]: > Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.144635 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.165225 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9dfz\" (UniqueName: \"kubernetes.io/projected/be61dd34-8d4d-4525-8187-3c21f22cd88a-kube-api-access-n9dfz\") pod \"be61dd34-8d4d-4525-8187-3c21f22cd88a\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.165279 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-catalog-content\") pod \"be61dd34-8d4d-4525-8187-3c21f22cd88a\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.165331 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-utilities\") pod \"be61dd34-8d4d-4525-8187-3c21f22cd88a\" (UID: \"be61dd34-8d4d-4525-8187-3c21f22cd88a\") " Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.166769 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-utilities" (OuterVolumeSpecName: "utilities") pod "be61dd34-8d4d-4525-8187-3c21f22cd88a" (UID: "be61dd34-8d4d-4525-8187-3c21f22cd88a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.189117 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be61dd34-8d4d-4525-8187-3c21f22cd88a-kube-api-access-n9dfz" (OuterVolumeSpecName: "kube-api-access-n9dfz") pod "be61dd34-8d4d-4525-8187-3c21f22cd88a" (UID: "be61dd34-8d4d-4525-8187-3c21f22cd88a"). InnerVolumeSpecName "kube-api-access-n9dfz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.267092 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.267125 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9dfz\" (UniqueName: \"kubernetes.io/projected/be61dd34-8d4d-4525-8187-3c21f22cd88a-kube-api-access-n9dfz\") on node \"crc\" DevicePath \"\"" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.269835 4913 generic.go:334] "Generic (PLEG): container finished" podID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerID="f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655" exitCode=0 Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.269890 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerDied","Data":"f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655"} Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.269923 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rm75l" event={"ID":"be61dd34-8d4d-4525-8187-3c21f22cd88a","Type":"ContainerDied","Data":"2b2da556d8d5ceb79d9f0ad50be41dd604bef4e604d018fead743630456fc287"} Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.269944 4913 scope.go:117] "RemoveContainer" containerID="f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.270086 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rm75l" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.283353 4913 scope.go:117] "RemoveContainer" containerID="9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.300651 4913 scope.go:117] "RemoveContainer" containerID="ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.328785 4913 scope.go:117] "RemoveContainer" containerID="f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655" Jan 21 06:38:49 crc kubenswrapper[4913]: E0121 06:38:49.329236 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655\": container with ID starting with f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655 not found: ID does not exist" containerID="f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.329274 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655"} err="failed to get container status \"f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655\": rpc error: code = NotFound desc = could not find container \"f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655\": container with ID starting with f6d2b5b018711f772cc5a523d12b0a37e5ceaca11a75717ae9da1d86813ea655 not found: ID does not exist" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.329302 4913 scope.go:117] "RemoveContainer" containerID="9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4" Jan 21 06:38:49 crc kubenswrapper[4913]: E0121 06:38:49.329790 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4\": container with ID starting with 9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4 not found: ID does not exist" containerID="9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.329824 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4"} err="failed to get container status \"9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4\": rpc error: code = NotFound desc = could not find container \"9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4\": container with ID starting with 9931f244aadc323a541ac9bd7072f976caed4bad98ba26f7f3ebbf5e60c47ab4 not found: ID does not exist" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.329842 4913 scope.go:117] "RemoveContainer" containerID="ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea" Jan 21 06:38:49 crc kubenswrapper[4913]: E0121 06:38:49.330116 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea\": container with ID starting with ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea not found: ID does not exist" containerID="ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.330142 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea"} err="failed to get container status \"ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea\": rpc error: code = NotFound desc = could not find container 
\"ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea\": container with ID starting with ab61ce6e3a9c80df21453ea1d9794302f612eb0d27dcfb93d85cb5ace99a55ea not found: ID does not exist" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.693168 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be61dd34-8d4d-4525-8187-3c21f22cd88a" (UID: "be61dd34-8d4d-4525-8187-3c21f22cd88a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.772165 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be61dd34-8d4d-4525-8187-3c21f22cd88a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.895238 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rm75l"] Jan 21 06:38:49 crc kubenswrapper[4913]: I0121 06:38:49.897521 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rm75l"] Jan 21 06:38:50 crc kubenswrapper[4913]: I0121 06:38:50.534538 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" path="/var/lib/kubelet/pods/be61dd34-8d4d-4525-8187-3c21f22cd88a/volumes" Jan 21 06:38:54 crc kubenswrapper[4913]: I0121 06:38:54.184097 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:38:54 crc kubenswrapper[4913]: I0121 06:38:54.351913 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:38:54 crc kubenswrapper[4913]: I0121 06:38:54.388002 4913 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:38:56 crc kubenswrapper[4913]: I0121 06:38:56.534518 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:38:56 crc kubenswrapper[4913]: I0121 06:38:56.605357 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpkw9"] Jan 21 06:38:56 crc kubenswrapper[4913]: I0121 06:38:56.956277 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lpkw9" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="registry-server" containerID="cri-o://374c9257cde812fb068c323490cd73278adfe48a94284e17a0ade3e0c70e7c11" gracePeriod=2 Jan 21 06:38:57 crc kubenswrapper[4913]: I0121 06:38:57.552648 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:38:57 crc kubenswrapper[4913]: I0121 06:38:57.611919 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:38:57 crc kubenswrapper[4913]: I0121 06:38:57.980498 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:38:58 crc kubenswrapper[4913]: I0121 06:38:58.040901 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:38:58 crc kubenswrapper[4913]: I0121 06:38:58.970617 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlb56" event={"ID":"2255f06f-74ad-4308-9575-c04f8c24d4d5","Type":"ContainerStarted","Data":"10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c"} Jan 21 06:38:58 crc kubenswrapper[4913]: I0121 06:38:58.973677 4913 generic.go:334] "Generic (PLEG): 
container finished" podID="7521412f-3363-4617-9740-9dd9124df38e" containerID="374c9257cde812fb068c323490cd73278adfe48a94284e17a0ade3e0c70e7c11" exitCode=0 Jan 21 06:38:58 crc kubenswrapper[4913]: I0121 06:38:58.973728 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpkw9" event={"ID":"7521412f-3363-4617-9740-9dd9124df38e","Type":"ContainerDied","Data":"374c9257cde812fb068c323490cd73278adfe48a94284e17a0ade3e0c70e7c11"} Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.003505 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jlb56" podStartSLOduration=7.918960463 podStartE2EDuration="1m25.003483633s" podCreationTimestamp="2026-01-21 06:37:35 +0000 UTC" firstStartedPulling="2026-01-21 06:37:37.623935655 +0000 UTC m=+147.420295328" lastFinishedPulling="2026-01-21 06:38:54.708458835 +0000 UTC m=+224.504818498" observedRunningTime="2026-01-21 06:39:00.001468207 +0000 UTC m=+229.797827880" watchObservedRunningTime="2026-01-21 06:39:00.003483633 +0000 UTC m=+229.799843306" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.238960 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.416505 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-catalog-content\") pod \"7521412f-3363-4617-9740-9dd9124df38e\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.416726 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnkdv\" (UniqueName: \"kubernetes.io/projected/7521412f-3363-4617-9740-9dd9124df38e-kube-api-access-dnkdv\") pod \"7521412f-3363-4617-9740-9dd9124df38e\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.416823 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-utilities\") pod \"7521412f-3363-4617-9740-9dd9124df38e\" (UID: \"7521412f-3363-4617-9740-9dd9124df38e\") " Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.417982 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-utilities" (OuterVolumeSpecName: "utilities") pod "7521412f-3363-4617-9740-9dd9124df38e" (UID: "7521412f-3363-4617-9740-9dd9124df38e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.422496 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7521412f-3363-4617-9740-9dd9124df38e-kube-api-access-dnkdv" (OuterVolumeSpecName: "kube-api-access-dnkdv") pod "7521412f-3363-4617-9740-9dd9124df38e" (UID: "7521412f-3363-4617-9740-9dd9124df38e"). InnerVolumeSpecName "kube-api-access-dnkdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.458288 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7521412f-3363-4617-9740-9dd9124df38e" (UID: "7521412f-3363-4617-9740-9dd9124df38e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.517801 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.517845 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnkdv\" (UniqueName: \"kubernetes.io/projected/7521412f-3363-4617-9740-9dd9124df38e-kube-api-access-dnkdv\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.517857 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7521412f-3363-4617-9740-9dd9124df38e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.797886 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkwc2"] Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.798186 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pkwc2" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="registry-server" containerID="cri-o://b9d441556daa0971bf71f920fb3f706db223927472ed6e26fcdf0552430912fb" gracePeriod=2 Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.999112 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-lpkw9" event={"ID":"7521412f-3363-4617-9740-9dd9124df38e","Type":"ContainerDied","Data":"4ca8f643221ff5fc21b50ab0e5b3cc24caa324b6eb82b83b209faf09771f9015"} Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.999226 4913 scope.go:117] "RemoveContainer" containerID="374c9257cde812fb068c323490cd73278adfe48a94284e17a0ade3e0c70e7c11" Jan 21 06:39:00 crc kubenswrapper[4913]: I0121 06:39:00.999415 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpkw9" Jan 21 06:39:01 crc kubenswrapper[4913]: I0121 06:39:01.027237 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpkw9"] Jan 21 06:39:01 crc kubenswrapper[4913]: I0121 06:39:01.028774 4913 scope.go:117] "RemoveContainer" containerID="7df421d769e0135e3fb9a32354b2ade06ec971399dd6c8201985258f4e4a34b1" Jan 21 06:39:01 crc kubenswrapper[4913]: I0121 06:39:01.031967 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpkw9"] Jan 21 06:39:01 crc kubenswrapper[4913]: I0121 06:39:01.055586 4913 scope.go:117] "RemoveContainer" containerID="93e15fb5b03e79a08467b762e78a24c070dcc8c24e8f33b03e16ab6662aedb40" Jan 21 06:39:02 crc kubenswrapper[4913]: I0121 06:39:02.009289 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerStarted","Data":"b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5"} Jan 21 06:39:02 crc kubenswrapper[4913]: I0121 06:39:02.536493 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7521412f-3363-4617-9740-9dd9124df38e" path="/var/lib/kubelet/pods/7521412f-3363-4617-9740-9dd9124df38e/volumes" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.020195 4913 generic.go:334] "Generic (PLEG): container finished" 
podID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerID="b9d441556daa0971bf71f920fb3f706db223927472ed6e26fcdf0552430912fb" exitCode=0 Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.020295 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwc2" event={"ID":"14e729d1-3cb1-49d7-b34f-d997333ec65f","Type":"ContainerDied","Data":"b9d441556daa0971bf71f920fb3f706db223927472ed6e26fcdf0552430912fb"} Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.045768 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fszdj" podStartSLOduration=4.807489562 podStartE2EDuration="1m29.045744556s" podCreationTimestamp="2026-01-21 06:37:34 +0000 UTC" firstStartedPulling="2026-01-21 06:37:35.511663229 +0000 UTC m=+145.308022902" lastFinishedPulling="2026-01-21 06:38:59.749918213 +0000 UTC m=+229.546277896" observedRunningTime="2026-01-21 06:39:03.044700797 +0000 UTC m=+232.841060490" watchObservedRunningTime="2026-01-21 06:39:03.045744556 +0000 UTC m=+232.842104249" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.338040 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.458088 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rcfg\" (UniqueName: \"kubernetes.io/projected/14e729d1-3cb1-49d7-b34f-d997333ec65f-kube-api-access-5rcfg\") pod \"14e729d1-3cb1-49d7-b34f-d997333ec65f\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.458251 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-utilities\") pod \"14e729d1-3cb1-49d7-b34f-d997333ec65f\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.458308 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-catalog-content\") pod \"14e729d1-3cb1-49d7-b34f-d997333ec65f\" (UID: \"14e729d1-3cb1-49d7-b34f-d997333ec65f\") " Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.459356 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-utilities" (OuterVolumeSpecName: "utilities") pod "14e729d1-3cb1-49d7-b34f-d997333ec65f" (UID: "14e729d1-3cb1-49d7-b34f-d997333ec65f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.462915 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e729d1-3cb1-49d7-b34f-d997333ec65f-kube-api-access-5rcfg" (OuterVolumeSpecName: "kube-api-access-5rcfg") pod "14e729d1-3cb1-49d7-b34f-d997333ec65f" (UID: "14e729d1-3cb1-49d7-b34f-d997333ec65f"). InnerVolumeSpecName "kube-api-access-5rcfg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.559138 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.559171 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rcfg\" (UniqueName: \"kubernetes.io/projected/14e729d1-3cb1-49d7-b34f-d997333ec65f-kube-api-access-5rcfg\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.574373 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14e729d1-3cb1-49d7-b34f-d997333ec65f" (UID: "14e729d1-3cb1-49d7-b34f-d997333ec65f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:39:03 crc kubenswrapper[4913]: I0121 06:39:03.660777 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14e729d1-3cb1-49d7-b34f-d997333ec65f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.028406 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwc2" event={"ID":"14e729d1-3cb1-49d7-b34f-d997333ec65f","Type":"ContainerDied","Data":"8f5064bd05054c2b02632229ded6fedcb4045b72cf1e85d3555133283a45b0c3"} Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.028469 4913 scope.go:117] "RemoveContainer" containerID="b9d441556daa0971bf71f920fb3f706db223927472ed6e26fcdf0552430912fb" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.028533 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwc2" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.049939 4913 scope.go:117] "RemoveContainer" containerID="a0c217ee06e4d0effa4b06e0042da74da4b4c664dbe6ca8ae4a8f377c3e40172" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.079633 4913 scope.go:117] "RemoveContainer" containerID="4536887c323df45fbc4166635e0604a06736c4d0fb3091dd1489a3822a0f1cf4" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.095320 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkwc2"] Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.098823 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pkwc2"] Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.534694 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" path="/var/lib/kubelet/pods/14e729d1-3cb1-49d7-b34f-d997333ec65f/volumes" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.712506 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.712574 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:39:04 crc kubenswrapper[4913]: I0121 06:39:04.756414 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:39:06 crc kubenswrapper[4913]: I0121 06:39:06.071093 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:39:06 crc kubenswrapper[4913]: I0121 06:39:06.071132 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:39:06 crc 
kubenswrapper[4913]: I0121 06:39:06.115367 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.104671 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.456474 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6p62"] Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.692737 4913 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.693019 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5" gracePeriod=15 Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.693097 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019" gracePeriod=15 Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.693131 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4" gracePeriod=15 Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.693028 4913 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31" gracePeriod=15 Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.693181 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32" gracePeriod=15 Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699240 4913 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.699694 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699711 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.699724 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d389cec5-c315-4a24-92fd-d5ed381b3b5f" containerName="pruner" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699732 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="d389cec5-c315-4a24-92fd-d5ed381b3b5f" containerName="pruner" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.699740 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699898 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 
06:39:07.699934 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="extract-utilities" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699941 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="extract-utilities" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.699950 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="extract-content" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699957 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="extract-content" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.699964 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.699969 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.699979 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700004 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700013 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700019 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 06:39:07 
crc kubenswrapper[4913]: E0121 06:39:07.700028 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="extract-utilities" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700036 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="extract-utilities" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700047 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700052 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700062 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700088 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700095 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700100 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700108 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="extract-content" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700114 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="extract-content" Jan 
21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700120 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="extract-content" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700126 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="extract-content" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700132 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700138 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700173 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700180 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 06:39:07 crc kubenswrapper[4913]: E0121 06:39:07.700187 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="extract-utilities" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700193 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="extract-utilities" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700326 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700339 4913 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="14e729d1-3cb1-49d7-b34f-d997333ec65f" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700346 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700354 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="be61dd34-8d4d-4525-8187-3c21f22cd88a" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700361 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="d389cec5-c315-4a24-92fd-d5ed381b3b5f" containerName="pruner" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700370 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700376 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700384 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="7521412f-3363-4617-9740-9dd9124df38e" containerName="registry-server" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700409 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.700418 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.701899 4913 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.702307 4913 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.705899 4913 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.733377 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.813509 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.813949 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.814100 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.814231 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.814346 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.814372 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.814412 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.814431 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915808 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915853 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915896 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915920 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915935 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915965 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915970 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915990 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915999 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.916033 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.915993 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.916011 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.916015 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.916080 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.916156 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:07 crc kubenswrapper[4913]: I0121 06:39:07.916205 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.030853 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:08 crc kubenswrapper[4913]: W0121 06:39:08.046493 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-673ed5e82599713f25287509acadc7f349e017ad8e64f26ab2300b791e92722f WatchSource:0}: Error finding container 673ed5e82599713f25287509acadc7f349e017ad8e64f26ab2300b791e92722f: Status 404 returned error can't find the container with id 673ed5e82599713f25287509acadc7f349e017ad8e64f26ab2300b791e92722f Jan 21 06:39:08 crc kubenswrapper[4913]: E0121 06:39:08.049250 4913 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cabb51f14d6c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 06:39:08.048651969 +0000 UTC m=+237.845011642,LastTimestamp:2026-01-21 06:39:08.048651969 +0000 UTC m=+237.845011642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.059655 4913 generic.go:334] "Generic (PLEG): container finished" podID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" containerID="0b206ec76a91256c0c91606cbe0925f94e7fbd4e7b6b747641a151b3beb320e9" exitCode=0 Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.059778 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f","Type":"ContainerDied","Data":"0b206ec76a91256c0c91606cbe0925f94e7fbd4e7b6b747641a151b3beb320e9"} Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.060388 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.060657 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.062109 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.063276 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 
06:39:08.063919 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31" exitCode=0 Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.063943 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32" exitCode=0 Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.063951 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019" exitCode=0 Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.063958 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4" exitCode=2 Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.063992 4913 scope.go:117] "RemoveContainer" containerID="52b4a5b43c0b1bcdb31092566e21a913e4f8ba1e9e5027c730e5e1e31b7267a5" Jan 21 06:39:08 crc kubenswrapper[4913]: I0121 06:39:08.065034 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"673ed5e82599713f25287509acadc7f349e017ad8e64f26ab2300b791e92722f"} Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.077093 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.081420 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"43add6da62b377541babb732cc3e9566b8a93ef426ada2deaad87cd9ee4e97bb"} Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.082504 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.084245 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.312194 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.312669 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.313033 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435254 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kubelet-dir\") pod \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435317 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" (UID: "6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435329 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-var-lock\") pod \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435376 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kube-api-access\") pod \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\" (UID: \"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f\") " Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435395 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-var-lock" (OuterVolumeSpecName: "var-lock") pod "6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" (UID: "6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435769 4913 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.435780 4913 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.441940 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" (UID: "6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:39:09 crc kubenswrapper[4913]: I0121 06:39:09.536791 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.088457 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.088477 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f","Type":"ContainerDied","Data":"5cfc103b743cf4cd9f52146d725a2c25d6e49ba42c9012d9ddde5cfdedf47ef3"} Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.089006 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cfc103b743cf4cd9f52146d725a2c25d6e49ba42c9012d9ddde5cfdedf47ef3" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.141176 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.141708 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.145995 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.146806 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.147355 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.147791 4913 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.148354 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.243499 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.243567 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.243637 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.243940 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.243982 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.244001 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.344833 4913 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.344891 4913 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.344906 4913 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.530240 4913 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.530685 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.531102 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection 
refused" Jan 21 06:39:10 crc kubenswrapper[4913]: I0121 06:39:10.534450 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.065963 4913 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cabb51f14d6c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 06:39:08.048651969 +0000 UTC m=+237.845011642,LastTimestamp:2026-01-21 06:39:08.048651969 +0000 UTC m=+237.845011642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.100472 4913 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5" exitCode=0 Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.100528 4913 scope.go:117] "RemoveContainer" containerID="4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.100695 4913 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.101723 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.102180 4913 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.103036 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.105188 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.106253 4913 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.106492 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.123231 4913 scope.go:117] "RemoveContainer" containerID="5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.137969 4913 scope.go:117] "RemoveContainer" containerID="75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.157519 4913 scope.go:117] "RemoveContainer" containerID="c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.173184 4913 scope.go:117] "RemoveContainer" containerID="c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.191469 4913 scope.go:117] "RemoveContainer" containerID="6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.217152 4913 scope.go:117] "RemoveContainer" containerID="4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.218108 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\": container with ID starting with 4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31 not found: ID does not 
exist" containerID="4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.218163 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31"} err="failed to get container status \"4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\": rpc error: code = NotFound desc = could not find container \"4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31\": container with ID starting with 4e426c525651d1051c5dabd742f2f5ef915e43cb976cbde4c33f78b7e1997d31 not found: ID does not exist" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.218185 4913 scope.go:117] "RemoveContainer" containerID="5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.218830 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\": container with ID starting with 5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32 not found: ID does not exist" containerID="5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.218924 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32"} err="failed to get container status \"5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\": rpc error: code = NotFound desc = could not find container \"5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32\": container with ID starting with 5671e963215994ec5aedfcbe0d63144fc8a4289337c45d47adc42bfcc45cbd32 not found: ID does not exist" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.218969 4913 scope.go:117] 
"RemoveContainer" containerID="75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.219428 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\": container with ID starting with 75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019 not found: ID does not exist" containerID="75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.219483 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019"} err="failed to get container status \"75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\": rpc error: code = NotFound desc = could not find container \"75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019\": container with ID starting with 75cf4ac90fe13c54f09d013a907ad6d86dc2bf7d54a7bd09dd1d8bbbed113019 not found: ID does not exist" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.219529 4913 scope.go:117] "RemoveContainer" containerID="c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.220167 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\": container with ID starting with c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4 not found: ID does not exist" containerID="c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.220198 4913 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4"} err="failed to get container status \"c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\": rpc error: code = NotFound desc = could not find container \"c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4\": container with ID starting with c47b54c11fc6da1c5d18e87bf0e43922dd59f82b3c518f183bd89e8574c61ab4 not found: ID does not exist" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.220217 4913 scope.go:117] "RemoveContainer" containerID="c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.220555 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\": container with ID starting with c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5 not found: ID does not exist" containerID="c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.220575 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5"} err="failed to get container status \"c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\": rpc error: code = NotFound desc = could not find container \"c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5\": container with ID starting with c227844286983a31e3c469703377ad79e15b7eff0a77f9df8192b6071f9875e5 not found: ID does not exist" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.220604 4913 scope.go:117] "RemoveContainer" containerID="6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204" Jan 21 06:39:11 crc kubenswrapper[4913]: E0121 06:39:11.220986 4913 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\": container with ID starting with 6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204 not found: ID does not exist" containerID="6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204" Jan 21 06:39:11 crc kubenswrapper[4913]: I0121 06:39:11.221024 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204"} err="failed to get container status \"6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\": rpc error: code = NotFound desc = could not find container \"6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204\": container with ID starting with 6a95ce1ea9b688c870e5c8ad2a426e5612571d96e6795d2a073fb2cc318b6204 not found: ID does not exist" Jan 21 06:39:14 crc kubenswrapper[4913]: I0121 06:39:14.769117 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:39:14 crc kubenswrapper[4913]: I0121 06:39:14.769778 4913 status_manager.go:851] "Failed to get status for pod" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" pod="openshift-marketplace/certified-operators-fszdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fszdj\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:14 crc kubenswrapper[4913]: I0121 06:39:14.770362 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:14 crc kubenswrapper[4913]: I0121 06:39:14.771065 4913 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.522562 4913 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.523176 4913 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.523743 4913 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.524204 4913 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.524642 4913 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:17 crc kubenswrapper[4913]: I0121 06:39:17.524704 4913 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts 
to update lease" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.525147 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="200ms" Jan 21 06:39:17 crc kubenswrapper[4913]: E0121 06:39:17.726334 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="400ms" Jan 21 06:39:18 crc kubenswrapper[4913]: E0121 06:39:18.127606 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="800ms" Jan 21 06:39:18 crc kubenswrapper[4913]: E0121 06:39:18.562124 4913 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" volumeName="registry-storage" Jan 21 06:39:18 crc kubenswrapper[4913]: E0121 06:39:18.929325 4913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="1.6s" Jan 21 06:39:20 crc kubenswrapper[4913]: E0121 06:39:20.530272 4913 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="3.2s" Jan 21 06:39:20 crc kubenswrapper[4913]: I0121 06:39:20.532636 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:20 crc kubenswrapper[4913]: I0121 06:39:20.533361 4913 status_manager.go:851] "Failed to get status for pod" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" pod="openshift-marketplace/certified-operators-fszdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fszdj\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:20 crc kubenswrapper[4913]: I0121 06:39:20.534052 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:21 crc kubenswrapper[4913]: E0121 06:39:21.067151 4913 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cabb51f14d6c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 06:39:08.048651969 +0000 UTC m=+237.845011642,LastTimestamp:2026-01-21 06:39:08.048651969 +0000 UTC m=+237.845011642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.176713 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.176810 4913 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727" exitCode=1 Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.176858 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727"} Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.177631 4913 scope.go:117] "RemoveContainer" containerID="ff2974572e094cd5c521c0a1c1997950d624600f0fa4d5da1fbbd9a0f6b4b727" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.178292 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.179032 4913 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.179741 4913 status_manager.go:851] "Failed to get status for pod" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" pod="openshift-marketplace/certified-operators-fszdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fszdj\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.180738 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.527102 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.528436 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.529103 4913 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.529461 4913 status_manager.go:851] "Failed to get status for pod" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" pod="openshift-marketplace/certified-operators-fszdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fszdj\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.529885 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.549984 4913 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.550026 4913 mirror_client.go:130] "Deleting a 
mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:22 crc kubenswrapper[4913]: E0121 06:39:22.550635 4913 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:22 crc kubenswrapper[4913]: I0121 06:39:22.551292 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:22 crc kubenswrapper[4913]: W0121 06:39:22.580580 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-2b72d50c2aef62572d7cd2b2bfa5d0ca8bb62cab8b2d5d908382c77a3e6be9fb WatchSource:0}: Error finding container 2b72d50c2aef62572d7cd2b2bfa5d0ca8bb62cab8b2d5d908382c77a3e6be9fb: Status 404 returned error can't find the container with id 2b72d50c2aef62572d7cd2b2bfa5d0ca8bb62cab8b2d5d908382c77a3e6be9fb Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.189483 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.189875 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c52c9ebf8626ba7e0921f8f4b7b3291277c38fee2ef91fe31b90732a25a7f81d"} Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.191404 4913 status_manager.go:851] "Failed to get status for pod" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" pod="openshift-marketplace/certified-operators-fszdj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fszdj\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.192120 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.192708 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.192932 4913 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e7036e262d91ff61126733fa5d492914b16196efae02cae4e0e6c5c5e13f0ac4" exitCode=0 Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.192981 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e7036e262d91ff61126733fa5d492914b16196efae02cae4e0e6c5c5e13f0ac4"} Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.193011 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2b72d50c2aef62572d7cd2b2bfa5d0ca8bb62cab8b2d5d908382c77a3e6be9fb"} Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.193196 4913 status_manager.go:851] "Failed to get status for pod" 
podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.193305 4913 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.193320 4913 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:23 crc kubenswrapper[4913]: E0121 06:39:23.193768 4913 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.193864 4913 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.197078 4913 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.197518 4913 status_manager.go:851] "Failed to 
get status for pod" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" pod="openshift-marketplace/certified-operators-fszdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-fszdj\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:23 crc kubenswrapper[4913]: I0121 06:39:23.197867 4913 status_manager.go:851] "Failed to get status for pod" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Jan 21 06:39:24 crc kubenswrapper[4913]: I0121 06:39:24.006205 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:39:24 crc kubenswrapper[4913]: I0121 06:39:24.014667 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:39:24 crc kubenswrapper[4913]: I0121 06:39:24.201209 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c30d3d80bed8eb641fa894e2540ecb55078e3943f4489d95ae6649b34503b551"} Jan 21 06:39:24 crc kubenswrapper[4913]: I0121 06:39:24.201294 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a2c9cf8f51a1594bb24855ce8083da2e718374c343be191f79305151b45f4e85"} Jan 21 06:39:24 crc kubenswrapper[4913]: I0121 06:39:24.201336 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cfc26a938aee341d29090998cf1db6d0a34b50f4f394028cc14a30369ab4b858"} Jan 21 06:39:24 crc kubenswrapper[4913]: I0121 06:39:24.201709 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:39:25 crc kubenswrapper[4913]: I0121 06:39:25.210292 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6828f539998b6fca1d81781efa2153f2b4b210388de3575e0d28a396c2edd6f7"} Jan 21 06:39:25 crc kubenswrapper[4913]: I0121 06:39:25.210637 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6f221e52caedca5d4d27564ec829e9a3215d870a203bb9a8724281a92a530e5e"} Jan 21 06:39:25 crc kubenswrapper[4913]: I0121 06:39:25.210641 4913 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:25 crc kubenswrapper[4913]: I0121 06:39:25.210669 4913 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:27 crc kubenswrapper[4913]: I0121 06:39:27.552066 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:27 crc kubenswrapper[4913]: I0121 06:39:27.553103 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:27 crc kubenswrapper[4913]: I0121 06:39:27.562999 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:30 crc kubenswrapper[4913]: I0121 
06:39:30.228922 4913 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:30 crc kubenswrapper[4913]: I0121 06:39:30.544485 4913 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8e606b3c-12de-490a-a076-7ff0b95a072d" Jan 21 06:39:31 crc kubenswrapper[4913]: I0121 06:39:31.250294 4913 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:31 crc kubenswrapper[4913]: I0121 06:39:31.250335 4913 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:31 crc kubenswrapper[4913]: I0121 06:39:31.250336 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:31 crc kubenswrapper[4913]: I0121 06:39:31.254183 4913 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8e606b3c-12de-490a-a076-7ff0b95a072d" Jan 21 06:39:31 crc kubenswrapper[4913]: I0121 06:39:31.257531 4913 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://cfc26a938aee341d29090998cf1db6d0a34b50f4f394028cc14a30369ab4b858" Jan 21 06:39:31 crc kubenswrapper[4913]: I0121 06:39:31.257573 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:32 crc kubenswrapper[4913]: I0121 06:39:32.255406 4913 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 
06:39:32 crc kubenswrapper[4913]: I0121 06:39:32.255433 4913 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:32 crc kubenswrapper[4913]: I0121 06:39:32.259232 4913 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8e606b3c-12de-490a-a076-7ff0b95a072d" Jan 21 06:39:32 crc kubenswrapper[4913]: I0121 06:39:32.483454 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" podUID="bd2a9afe-21be-43e4-970d-03daff0713a1" containerName="oauth-openshift" containerID="cri-o://62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065" gracePeriod=15 Jan 21 06:39:32 crc kubenswrapper[4913]: I0121 06:39:32.921575 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041210 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-session\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041291 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d6jr\" (UniqueName: \"kubernetes.io/projected/bd2a9afe-21be-43e4-970d-03daff0713a1-kube-api-access-2d6jr\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041340 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-provider-selection\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041399 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-idp-0-file-data\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041441 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-trusted-ca-bundle\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041480 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-ocp-branding-template\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041516 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-router-certs\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041549 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-policies\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041582 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-cliconfig\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041639 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-serving-cert\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041675 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-login\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041723 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-dir\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041763 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-error\") pod 
\"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.041797 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-service-ca\") pod \"bd2a9afe-21be-43e4-970d-03daff0713a1\" (UID: \"bd2a9afe-21be-43e4-970d-03daff0713a1\") " Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.042875 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.042908 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.042990 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.044458 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.045457 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.048439 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.048856 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.049493 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.050113 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.050255 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.050478 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.051293 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd2a9afe-21be-43e4-970d-03daff0713a1-kube-api-access-2d6jr" (OuterVolumeSpecName: "kube-api-access-2d6jr") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "kube-api-access-2d6jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.052380 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.053956 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "bd2a9afe-21be-43e4-970d-03daff0713a1" (UID: "bd2a9afe-21be-43e4-970d-03daff0713a1"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143794 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143898 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143917 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143931 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143966 4913 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143977 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143989 4913 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.143998 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.144008 4913 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd2a9afe-21be-43e4-970d-03daff0713a1-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.144018 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.144032 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.144042 4913 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.144051 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d6jr\" (UniqueName: \"kubernetes.io/projected/bd2a9afe-21be-43e4-970d-03daff0713a1-kube-api-access-2d6jr\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.144061 4913 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bd2a9afe-21be-43e4-970d-03daff0713a1-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.265769 4913 generic.go:334] "Generic (PLEG): container finished" podID="bd2a9afe-21be-43e4-970d-03daff0713a1" containerID="62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065" exitCode=0 Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.267146 4913 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.267173 4913 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f45b52c3-5a8a-4d2d-864d-059884213e59" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.265884 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.265915 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" event={"ID":"bd2a9afe-21be-43e4-970d-03daff0713a1","Type":"ContainerDied","Data":"62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065"} Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.267783 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-b6p62" event={"ID":"bd2a9afe-21be-43e4-970d-03daff0713a1","Type":"ContainerDied","Data":"e3b9231ae3a56871d91beb2ec7c695d6151fa1fa2ff2296779fd683ef36161ab"} Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.267829 4913 scope.go:117] "RemoveContainer" containerID="62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.272815 4913 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8e606b3c-12de-490a-a076-7ff0b95a072d" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.324782 4913 scope.go:117] "RemoveContainer" containerID="62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065" Jan 21 06:39:33 crc kubenswrapper[4913]: E0121 06:39:33.325484 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065\": container with ID starting with 62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065 not found: ID does not exist" containerID="62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065" Jan 21 06:39:33 crc kubenswrapper[4913]: I0121 06:39:33.325535 4913 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065"} err="failed to get container status \"62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065\": rpc error: code = NotFound desc = could not find container \"62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065\": container with ID starting with 62f40d81992c99b63597a7f81507cf32c1ff8624ceeb0892ce6dc2ff2ae57065 not found: ID does not exist" Jan 21 06:39:38 crc kubenswrapper[4913]: I0121 06:39:38.341366 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 06:39:40 crc kubenswrapper[4913]: I0121 06:39:40.237114 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 06:39:40 crc kubenswrapper[4913]: I0121 06:39:40.434256 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 06:39:40 crc kubenswrapper[4913]: I0121 06:39:40.759644 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 06:39:40 crc kubenswrapper[4913]: I0121 06:39:40.853056 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 06:39:41 crc kubenswrapper[4913]: I0121 06:39:41.443861 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 06:39:41 crc kubenswrapper[4913]: I0121 06:39:41.767167 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 06:39:41 crc kubenswrapper[4913]: I0121 06:39:41.997234 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 
21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.037899 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.038255 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.178025 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.307699 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.335485 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.339733 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.403691 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.749977 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.832063 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.836910 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 06:39:42 crc kubenswrapper[4913]: I0121 06:39:42.979139 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.078983 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.342656 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.365504 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.416005 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.512253 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.521534 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.530785 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.580079 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.628225 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.687047 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.768292 4913 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.865448 4913 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.959350 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.978773 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 06:39:43 crc kubenswrapper[4913]: I0121 06:39:43.990904 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.244571 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.274412 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.333166 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.345985 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.350675 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.402711 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.428656 4913 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.498540 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.504849 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.587621 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.603557 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.837184 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.879633 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 06:39:44 crc kubenswrapper[4913]: I0121 06:39:44.895150 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.028519 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.193585 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.215466 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.215518 4913 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.359585 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.388992 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.460296 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.503760 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.527309 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.581521 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.587487 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.618483 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.619580 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.640656 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 06:39:45 crc 
kubenswrapper[4913]: I0121 06:39:45.643389 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.651130 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.736582 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.768898 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.781064 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.860830 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.866518 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.929122 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.956322 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 06:39:45 crc kubenswrapper[4913]: I0121 06:39:45.985568 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.094883 4913 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.117096 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.141118 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.144177 4913 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.148706 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.148683982 podStartE2EDuration="39.148683982s" podCreationTimestamp="2026-01-21 06:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:39:30.130254733 +0000 UTC m=+259.926614426" watchObservedRunningTime="2026-01-21 06:39:46.148683982 +0000 UTC m=+275.945043695" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.152855 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-b6p62","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.152927 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.155380 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.160738 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 06:39:46 crc 
kubenswrapper[4913]: I0121 06:39:46.179385 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.179365984 podStartE2EDuration="16.179365984s" podCreationTimestamp="2026-01-21 06:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:39:46.176404202 +0000 UTC m=+275.972763885" watchObservedRunningTime="2026-01-21 06:39:46.179365984 +0000 UTC m=+275.975725667" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.336758 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.349324 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.425464 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.466969 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.525489 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.534392 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.534993 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd2a9afe-21be-43e4-970d-03daff0713a1" path="/var/lib/kubelet/pods/bd2a9afe-21be-43e4-970d-03daff0713a1/volumes" Jan 21 06:39:46 crc kubenswrapper[4913]: 
I0121 06:39:46.545443 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.595021 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.710559 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.733232 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.769310 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.797345 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.855970 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.883300 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.939131 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.947337 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.948631 4913 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 06:39:46 crc kubenswrapper[4913]: I0121 06:39:46.958746 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.003907 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.076511 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.177235 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.178187 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.231049 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.306069 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.344698 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.442607 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.459251 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.587720 
4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.696918 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.777084 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.836429 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.882359 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.918250 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.955335 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 06:39:47 crc kubenswrapper[4913]: I0121 06:39:47.997487 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.002194 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.214478 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.230886 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 06:39:48 crc 
kubenswrapper[4913]: I0121 06:39:48.259417 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.325291 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.436427 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.458965 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.482947 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.540558 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.573788 4913 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.590932 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.649516 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.759840 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.793932 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.794143 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.864064 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.869177 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 06:39:48 crc kubenswrapper[4913]: I0121 06:39:48.987262 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.116843 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.228065 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.310676 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.356685 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.418556 4913 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.442651 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.467849 4913 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.503138 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.628072 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.638772 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.661094 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.674430 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.866365 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 06:39:49 crc kubenswrapper[4913]: I0121 06:39:49.991905 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.079339 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.094469 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.110253 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 
06:39:50.132683 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.260311 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.362380 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.390308 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.439179 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.452059 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.455341 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.499421 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.509465 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.522135 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.584646 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 
21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.707824 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.785028 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.787392 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.788925 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.844550 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.864614 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 06:39:50 crc kubenswrapper[4913]: I0121 06:39:50.999218 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.057959 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.270821 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.472014 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.493666 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.646875 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.675198 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.678836 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.727074 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.888517 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 06:39:51 crc kubenswrapper[4913]: I0121 06:39:51.960943 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.020462 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.051804 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.082943 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.124624 4913 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.171477 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.174571 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.187521 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.223610 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.259709 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.372807 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.385222 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.458904 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.460708 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.536359 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 
06:39:52.750696 4913 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.847483 4913 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.847849 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://43add6da62b377541babb732cc3e9566b8a93ef426ada2deaad87cd9ee4e97bb" gracePeriod=5 Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.859036 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.888008 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 06:39:52 crc kubenswrapper[4913]: I0121 06:39:52.994760 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.018191 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.102538 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.179334 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.192015 4913 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.241057 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.386318 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.389187 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.408116 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.409543 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.431643 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.498204 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.528042 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.583987 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.628397 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 06:39:53 crc kubenswrapper[4913]: 
I0121 06:39:53.632753 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.668275 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.753738 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.781268 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.800254 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 06:39:53 crc kubenswrapper[4913]: I0121 06:39:53.936076 4913 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.172822 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.250155 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.277414 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"] Jan 21 06:39:54 crc kubenswrapper[4913]: E0121 06:39:54.277915 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.277967 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 
21 06:39:54 crc kubenswrapper[4913]: E0121 06:39:54.278008 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" containerName="installer" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.278025 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" containerName="installer" Jan 21 06:39:54 crc kubenswrapper[4913]: E0121 06:39:54.278052 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd2a9afe-21be-43e4-970d-03daff0713a1" containerName="oauth-openshift" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.278068 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd2a9afe-21be-43e4-970d-03daff0713a1" containerName="oauth-openshift" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.278336 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb6a93b-3fd7-4e3c-b96a-ed7499b8f22f" containerName="installer" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.278377 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.278404 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd2a9afe-21be-43e4-970d-03daff0713a1" containerName="oauth-openshift" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.279229 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.284742 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.284800 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.284954 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.285749 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.286465 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.289501 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.289504 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.291625 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.291636 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.291830 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 06:39:54 crc 
kubenswrapper[4913]: I0121 06:39:54.291968 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.296290 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.317887 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.318823 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.331758 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.338431 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.364026 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"] Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.402796 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.440781 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc 
kubenswrapper[4913]: I0121 06:39:54.440831 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-audit-policies\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.440889 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.440924 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40d3280d-72fa-4b13-ba3a-94dda976ad5f-audit-dir\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.440966 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-session\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.440985 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441107 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9wqx\" (UniqueName: \"kubernetes.io/projected/40d3280d-72fa-4b13-ba3a-94dda976ad5f-kube-api-access-j9wqx\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441249 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441284 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-error\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441330 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-login\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" 
(UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441388 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441456 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441485 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.441520 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc 
kubenswrapper[4913]: I0121 06:39:54.453024 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.492052 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.504861 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543136 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9wqx\" (UniqueName: \"kubernetes.io/projected/40d3280d-72fa-4b13-ba3a-94dda976ad5f-kube-api-access-j9wqx\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543209 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543237 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-error\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543268 4913 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-login\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543291 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543321 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543350 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543395 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: 
\"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543425 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543470 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-audit-policies\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543498 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543527 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40d3280d-72fa-4b13-ba3a-94dda976ad5f-audit-dir\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543621 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-session\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.543659 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.546383 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40d3280d-72fa-4b13-ba3a-94dda976ad5f-audit-dir\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.547241 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.547241 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-audit-policies\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 
06:39:54.548100 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.557923 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-login\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.561856 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.563258 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-session\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.563313 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.563628 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.564129 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.564992 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.572457 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-user-template-error\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " 
pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.573725 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40d3280d-72fa-4b13-ba3a-94dda976ad5f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.577020 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9wqx\" (UniqueName: \"kubernetes.io/projected/40d3280d-72fa-4b13-ba3a-94dda976ad5f-kube-api-access-j9wqx\") pod \"oauth-openshift-7bccf64dbb-7x2z2\" (UID: \"40d3280d-72fa-4b13-ba3a-94dda976ad5f\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.578899 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.616287 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.808356 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2"] Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.808854 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.811734 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.873958 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.922182 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 06:39:54 crc kubenswrapper[4913]: I0121 06:39:54.965516 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.007578 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.017385 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.140485 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.426021 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" 
event={"ID":"40d3280d-72fa-4b13-ba3a-94dda976ad5f","Type":"ContainerStarted","Data":"67466a9b6b6f2ca145f3464f8a49e5dd3bdf5481e9245013f6450d262d3e774d"} Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.426094 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" event={"ID":"40d3280d-72fa-4b13-ba3a-94dda976ad5f","Type":"ContainerStarted","Data":"ecf0f62c5f69dda3b63f27250d4153ae2523756bfdacde05ab9250c9f6935dbe"} Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.427649 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.450526 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" podStartSLOduration=48.450508518 podStartE2EDuration="48.450508518s" podCreationTimestamp="2026-01-21 06:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:39:55.447523075 +0000 UTC m=+285.243882758" watchObservedRunningTime="2026-01-21 06:39:55.450508518 +0000 UTC m=+285.246868191" Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.735377 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.814185 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7bccf64dbb-7x2z2" Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.828113 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 06:39:55 crc kubenswrapper[4913]: I0121 06:39:55.895123 4913 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 06:39:56 crc kubenswrapper[4913]: I0121 06:39:56.113489 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 06:39:56 crc kubenswrapper[4913]: I0121 06:39:56.180858 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.445915 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.445994 4913 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="43add6da62b377541babb732cc3e9566b8a93ef426ada2deaad87cd9ee4e97bb" exitCode=137 Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.446065 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="673ed5e82599713f25287509acadc7f349e017ad8e64f26ab2300b791e92722f" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.454391 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.454810 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.533972 4913 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.548878 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.548926 4913 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4a4f6234-7e9e-4216-83f8-6dd33d1298d2" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.550538 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.550578 4913 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4a4f6234-7e9e-4216-83f8-6dd33d1298d2" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602166 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602251 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602266 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602334 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602380 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602444 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602471 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602545 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.602554 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.603298 4913 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.603334 4913 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.603353 4913 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.603370 4913 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.614131 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:39:58 crc kubenswrapper[4913]: I0121 06:39:58.704478 4913 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 06:39:59 crc kubenswrapper[4913]: I0121 06:39:59.452374 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 06:40:00 crc kubenswrapper[4913]: I0121 06:40:00.537443 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.431581 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ffbwk"] Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.432668 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ffbwk" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="registry-server" containerID="cri-o://791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2" gracePeriod=30 Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.443641 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fszdj"] Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.443947 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fszdj" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="registry-server" containerID="cri-o://b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5" gracePeriod=30 Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.460669 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mvlq6"] Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.461084 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mvlq6" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="registry-server" containerID="cri-o://f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108" gracePeriod=30 Jan 21 06:40:05 crc 
kubenswrapper[4913]: I0121 06:40:05.482086 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qjrx8"] Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.482571 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerName="marketplace-operator" containerID="cri-o://ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3" gracePeriod=30 Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.491634 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlb56"] Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.491869 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jlb56" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="registry-server" containerID="cri-o://10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c" gracePeriod=30 Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.496719 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mmmzm"] Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.497755 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.503439 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpc4m"] Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.504410 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hpc4m" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="registry-server" containerID="cri-o://f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9" gracePeriod=30 Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.504673 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mmmzm"] Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.600240 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9850b956-f0a1-4e29-b5c2-703b0aa7b697-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.600448 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8xhv\" (UniqueName: \"kubernetes.io/projected/9850b956-f0a1-4e29-b5c2-703b0aa7b697-kube-api-access-m8xhv\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.600616 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/9850b956-f0a1-4e29-b5c2-703b0aa7b697-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.702085 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9850b956-f0a1-4e29-b5c2-703b0aa7b697-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.702140 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8xhv\" (UniqueName: \"kubernetes.io/projected/9850b956-f0a1-4e29-b5c2-703b0aa7b697-kube-api-access-m8xhv\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.702175 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9850b956-f0a1-4e29-b5c2-703b0aa7b697-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.703924 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9850b956-f0a1-4e29-b5c2-703b0aa7b697-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 
crc kubenswrapper[4913]: I0121 06:40:05.707670 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9850b956-f0a1-4e29-b5c2-703b0aa7b697-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.722906 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8xhv\" (UniqueName: \"kubernetes.io/projected/9850b956-f0a1-4e29-b5c2-703b0aa7b697-kube-api-access-m8xhv\") pod \"marketplace-operator-79b997595-mmmzm\" (UID: \"9850b956-f0a1-4e29-b5c2-703b0aa7b697\") " pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.819171 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.952316 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.957611 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.965146 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.973014 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:40:05 crc kubenswrapper[4913]: I0121 06:40:05.974098 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.112878 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-operator-metrics\") pod \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.112926 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-catalog-content\") pod \"2255f06f-74ad-4308-9575-c04f8c24d4d5\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.112967 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-trusted-ca\") pod \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.112994 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-utilities\") pod \"2255f06f-74ad-4308-9575-c04f8c24d4d5\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113029 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggtzg\" (UniqueName: \"kubernetes.io/projected/f2b20a33-f426-426f-9657-3d11d403629f-kube-api-access-ggtzg\") pod \"f2b20a33-f426-426f-9657-3d11d403629f\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113059 4913 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-catalog-content\") pod \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113093 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82n2p\" (UniqueName: \"kubernetes.io/projected/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-kube-api-access-82n2p\") pod \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\" (UID: \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113115 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-catalog-content\") pod \"d976374c-9adc-426a-9593-43e617e72281\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113143 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-utilities\") pod \"f2b20a33-f426-426f-9657-3d11d403629f\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113186 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bn5c\" (UniqueName: \"kubernetes.io/projected/d976374c-9adc-426a-9593-43e617e72281-kube-api-access-8bn5c\") pod \"d976374c-9adc-426a-9593-43e617e72281\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113215 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-utilities\") pod \"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\" (UID: 
\"b5a378fe-18a6-4be0-8d56-eaddc377bd8b\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113243 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-catalog-content\") pod \"f2b20a33-f426-426f-9657-3d11d403629f\" (UID: \"f2b20a33-f426-426f-9657-3d11d403629f\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113268 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-utilities\") pod \"d976374c-9adc-426a-9593-43e617e72281\" (UID: \"d976374c-9adc-426a-9593-43e617e72281\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113290 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-956sp\" (UniqueName: \"kubernetes.io/projected/2255f06f-74ad-4308-9575-c04f8c24d4d5-kube-api-access-956sp\") pod \"2255f06f-74ad-4308-9575-c04f8c24d4d5\" (UID: \"2255f06f-74ad-4308-9575-c04f8c24d4d5\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113315 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqstm\" (UniqueName: \"kubernetes.io/projected/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-kube-api-access-pqstm\") pod \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\" (UID: \"f3e3e7a7-a59e-4d12-8499-38ad4a72832d\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113747 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f3e3e7a7-a59e-4d12-8499-38ad4a72832d" (UID: "f3e3e7a7-a59e-4d12-8499-38ad4a72832d"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.113903 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-utilities" (OuterVolumeSpecName: "utilities") pod "2255f06f-74ad-4308-9575-c04f8c24d4d5" (UID: "2255f06f-74ad-4308-9575-c04f8c24d4d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.114454 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-utilities" (OuterVolumeSpecName: "utilities") pod "f2b20a33-f426-426f-9657-3d11d403629f" (UID: "f2b20a33-f426-426f-9657-3d11d403629f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.114632 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-utilities" (OuterVolumeSpecName: "utilities") pod "d976374c-9adc-426a-9593-43e617e72281" (UID: "d976374c-9adc-426a-9593-43e617e72281"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.117143 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-utilities" (OuterVolumeSpecName: "utilities") pod "b5a378fe-18a6-4be0-8d56-eaddc377bd8b" (UID: "b5a378fe-18a6-4be0-8d56-eaddc377bd8b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.117682 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-kube-api-access-pqstm" (OuterVolumeSpecName: "kube-api-access-pqstm") pod "f3e3e7a7-a59e-4d12-8499-38ad4a72832d" (UID: "f3e3e7a7-a59e-4d12-8499-38ad4a72832d"). InnerVolumeSpecName "kube-api-access-pqstm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.120982 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2255f06f-74ad-4308-9575-c04f8c24d4d5-kube-api-access-956sp" (OuterVolumeSpecName: "kube-api-access-956sp") pod "2255f06f-74ad-4308-9575-c04f8c24d4d5" (UID: "2255f06f-74ad-4308-9575-c04f8c24d4d5"). InnerVolumeSpecName "kube-api-access-956sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.121068 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d976374c-9adc-426a-9593-43e617e72281-kube-api-access-8bn5c" (OuterVolumeSpecName: "kube-api-access-8bn5c") pod "d976374c-9adc-426a-9593-43e617e72281" (UID: "d976374c-9adc-426a-9593-43e617e72281"). InnerVolumeSpecName "kube-api-access-8bn5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.121140 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b20a33-f426-426f-9657-3d11d403629f-kube-api-access-ggtzg" (OuterVolumeSpecName: "kube-api-access-ggtzg") pod "f2b20a33-f426-426f-9657-3d11d403629f" (UID: "f2b20a33-f426-426f-9657-3d11d403629f"). InnerVolumeSpecName "kube-api-access-ggtzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.121194 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-kube-api-access-82n2p" (OuterVolumeSpecName: "kube-api-access-82n2p") pod "b5a378fe-18a6-4be0-8d56-eaddc377bd8b" (UID: "b5a378fe-18a6-4be0-8d56-eaddc377bd8b"). InnerVolumeSpecName "kube-api-access-82n2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.121340 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f3e3e7a7-a59e-4d12-8499-38ad4a72832d" (UID: "f3e3e7a7-a59e-4d12-8499-38ad4a72832d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.146520 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2255f06f-74ad-4308-9575-c04f8c24d4d5" (UID: "2255f06f-74ad-4308-9575-c04f8c24d4d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.180735 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5a378fe-18a6-4be0-8d56-eaddc377bd8b" (UID: "b5a378fe-18a6-4be0-8d56-eaddc377bd8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.184226 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2b20a33-f426-426f-9657-3d11d403629f" (UID: "f2b20a33-f426-426f-9657-3d11d403629f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219242 4913 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219284 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219297 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggtzg\" (UniqueName: \"kubernetes.io/projected/f2b20a33-f426-426f-9657-3d11d403629f-kube-api-access-ggtzg\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219306 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219315 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82n2p\" (UniqueName: \"kubernetes.io/projected/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-kube-api-access-82n2p\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219323 4913 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219330 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bn5c\" (UniqueName: \"kubernetes.io/projected/d976374c-9adc-426a-9593-43e617e72281-kube-api-access-8bn5c\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219338 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a378fe-18a6-4be0-8d56-eaddc377bd8b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219375 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b20a33-f426-426f-9657-3d11d403629f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219385 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219397 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqstm\" (UniqueName: \"kubernetes.io/projected/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-kube-api-access-pqstm\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219409 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-956sp\" (UniqueName: \"kubernetes.io/projected/2255f06f-74ad-4308-9575-c04f8c24d4d5-kube-api-access-956sp\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219420 4913 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f3e3e7a7-a59e-4d12-8499-38ad4a72832d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.219432 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2255f06f-74ad-4308-9575-c04f8c24d4d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.230032 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.240170 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mmmzm"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.285529 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d976374c-9adc-426a-9593-43e617e72281" (UID: "d976374c-9adc-426a-9593-43e617e72281"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.320409 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-catalog-content\") pod \"92ab7368-d5ff-4ecc-846a-96791a313bce\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.320487 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9zd8\" (UniqueName: \"kubernetes.io/projected/92ab7368-d5ff-4ecc-846a-96791a313bce-kube-api-access-r9zd8\") pod \"92ab7368-d5ff-4ecc-846a-96791a313bce\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.320574 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-utilities\") pod \"92ab7368-d5ff-4ecc-846a-96791a313bce\" (UID: \"92ab7368-d5ff-4ecc-846a-96791a313bce\") " Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.320835 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d976374c-9adc-426a-9593-43e617e72281-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.321556 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-utilities" (OuterVolumeSpecName: "utilities") pod "92ab7368-d5ff-4ecc-846a-96791a313bce" (UID: "92ab7368-d5ff-4ecc-846a-96791a313bce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.323329 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92ab7368-d5ff-4ecc-846a-96791a313bce-kube-api-access-r9zd8" (OuterVolumeSpecName: "kube-api-access-r9zd8") pod "92ab7368-d5ff-4ecc-846a-96791a313bce" (UID: "92ab7368-d5ff-4ecc-846a-96791a313bce"). InnerVolumeSpecName "kube-api-access-r9zd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.371139 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92ab7368-d5ff-4ecc-846a-96791a313bce" (UID: "92ab7368-d5ff-4ecc-846a-96791a313bce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.422085 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.422121 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92ab7368-d5ff-4ecc-846a-96791a313bce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.422134 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9zd8\" (UniqueName: \"kubernetes.io/projected/92ab7368-d5ff-4ecc-846a-96791a313bce-kube-api-access-r9zd8\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.503762 4913 generic.go:334] "Generic (PLEG): container finished" podID="f2b20a33-f426-426f-9657-3d11d403629f" 
containerID="f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108" exitCode=0 Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.503796 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlq6" event={"ID":"f2b20a33-f426-426f-9657-3d11d403629f","Type":"ContainerDied","Data":"f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.503840 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvlq6" event={"ID":"f2b20a33-f426-426f-9657-3d11d403629f","Type":"ContainerDied","Data":"6a8e2ac63fb84aa47578d17a8198d55bdad0c3fb7a2896b7a8bd7e3526aa7149"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.503851 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvlq6" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.503870 4913 scope.go:117] "RemoveContainer" containerID="f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.507561 4913 generic.go:334] "Generic (PLEG): container finished" podID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerID="b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5" exitCode=0 Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.507617 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerDied","Data":"b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.507653 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fszdj" 
event={"ID":"b5a378fe-18a6-4be0-8d56-eaddc377bd8b","Type":"ContainerDied","Data":"eb2a4164400078d5e47383eb8825b8a46cafb4407ff81311bae02795bf3351aa"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.507672 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fszdj" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.510201 4913 generic.go:334] "Generic (PLEG): container finished" podID="d976374c-9adc-426a-9593-43e617e72281" containerID="f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9" exitCode=0 Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.510241 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpc4m" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.510272 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpc4m" event={"ID":"d976374c-9adc-426a-9593-43e617e72281","Type":"ContainerDied","Data":"f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.510294 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpc4m" event={"ID":"d976374c-9adc-426a-9593-43e617e72281","Type":"ContainerDied","Data":"cd166342c5c7d3828aa55b99bbc4cb3c9d3bdf94c3c49466b8128a155f8f51f9"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.512553 4913 generic.go:334] "Generic (PLEG): container finished" podID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerID="ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3" exitCode=0 Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.512627 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" 
event={"ID":"f3e3e7a7-a59e-4d12-8499-38ad4a72832d","Type":"ContainerDied","Data":"ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.512646 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" event={"ID":"f3e3e7a7-a59e-4d12-8499-38ad4a72832d","Type":"ContainerDied","Data":"cb3977af5e68023242bf0ddc97686fb8058507b9de52582bb7d762e6b09403d5"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.512706 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qjrx8" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.517823 4913 generic.go:334] "Generic (PLEG): container finished" podID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerID="10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c" exitCode=0 Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.518074 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlb56" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.518754 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlb56" event={"ID":"2255f06f-74ad-4308-9575-c04f8c24d4d5","Type":"ContainerDied","Data":"10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.518822 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlb56" event={"ID":"2255f06f-74ad-4308-9575-c04f8c24d4d5","Type":"ContainerDied","Data":"9d16632deb9ee398509b3e2cbaf8f4e0a65526fa7ef3942cc5e99a9c2c336883"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.529114 4913 generic.go:334] "Generic (PLEG): container finished" podID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerID="791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2" exitCode=0 Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.529311 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ffbwk" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.536276 4913 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mmmzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.536318 4913 scope.go:117] "RemoveContainer" containerID="7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.536344 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" podUID="9850b956-f0a1-4e29-b5c2-703b0aa7b697" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.560027 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.560076 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffbwk" event={"ID":"92ab7368-d5ff-4ecc-846a-96791a313bce","Type":"ContainerDied","Data":"791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.560106 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffbwk" event={"ID":"92ab7368-d5ff-4ecc-846a-96791a313bce","Type":"ContainerDied","Data":"56ab7cdf728ac690777654ae4eaf5e6fc42307f0dee5ce8045bb907e80f0f634"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.560124 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" event={"ID":"9850b956-f0a1-4e29-b5c2-703b0aa7b697","Type":"ContainerStarted","Data":"73c3d3da53f3d4e78dea8b18514995ad46e59b992a6bdecd72154ecb5e4a4cde"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.560164 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" event={"ID":"9850b956-f0a1-4e29-b5c2-703b0aa7b697","Type":"ContainerStarted","Data":"3fafd9712950a6993939b2c1d355ca5c639733783d48611accc373dd03498d3f"} Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.565167 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" podStartSLOduration=1.5651457020000001 podStartE2EDuration="1.565145702s" podCreationTimestamp="2026-01-21 06:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:40:06.561450949 +0000 UTC m=+296.357810632" watchObservedRunningTime="2026-01-21 06:40:06.565145702 +0000 UTC m=+296.361505395" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.607124 4913 scope.go:117] "RemoveContainer" containerID="6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.625656 4913 scope.go:117] "RemoveContainer" containerID="f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.627791 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108\": container with ID starting with f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108 not found: ID does not exist" containerID="f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108" Jan 21 06:40:06 crc 
kubenswrapper[4913]: I0121 06:40:06.627835 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108"} err="failed to get container status \"f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108\": rpc error: code = NotFound desc = could not find container \"f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108\": container with ID starting with f61dfec79d7832187a0f2c424b3c232454a3063e91dc9055341ad189a61af108 not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.627865 4913 scope.go:117] "RemoveContainer" containerID="7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.628084 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033\": container with ID starting with 7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033 not found: ID does not exist" containerID="7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.628100 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033"} err="failed to get container status \"7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033\": rpc error: code = NotFound desc = could not find container \"7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033\": container with ID starting with 7be39342d712cc741d54093d3bee24f9996287751c8b7a29cfbae0951a1df033 not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.628113 4913 scope.go:117] "RemoveContainer" containerID="6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9" Jan 21 
06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.628622 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9\": container with ID starting with 6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9 not found: ID does not exist" containerID="6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.628672 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9"} err="failed to get container status \"6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9\": rpc error: code = NotFound desc = could not find container \"6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9\": container with ID starting with 6b69cfb8db513351193d41f0524b5ca00e80d18827c7a55811a092f1880ef0f9 not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.628738 4913 scope.go:117] "RemoveContainer" containerID="b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.635878 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fszdj"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.639823 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fszdj"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.647107 4913 scope.go:117] "RemoveContainer" containerID="eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.664628 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mvlq6"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.671035 4913 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mvlq6"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.674834 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlb56"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.678611 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlb56"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.682685 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qjrx8"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.686022 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qjrx8"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.686360 4913 scope.go:117] "RemoveContainer" containerID="c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.692254 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpc4m"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.697628 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hpc4m"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.701198 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ffbwk"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.704769 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ffbwk"] Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.709082 4913 scope.go:117] "RemoveContainer" containerID="b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.710306 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5\": container with ID starting with b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5 not found: ID does not exist" containerID="b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.710405 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5"} err="failed to get container status \"b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5\": rpc error: code = NotFound desc = could not find container \"b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5\": container with ID starting with b72f98cd6b5be298262fae1538087abc2183e9ab7ba4d008162e8f09d26502d5 not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.710508 4913 scope.go:117] "RemoveContainer" containerID="eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.711822 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf\": container with ID starting with eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf not found: ID does not exist" containerID="eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.711861 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf"} err="failed to get container status \"eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf\": rpc error: code = NotFound desc = could not find container 
\"eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf\": container with ID starting with eca05e9be5286d3ecdf8a74af66911d7be37967e92554eb0db4245f853992baf not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.711903 4913 scope.go:117] "RemoveContainer" containerID="c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.712427 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635\": container with ID starting with c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635 not found: ID does not exist" containerID="c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.712533 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635"} err="failed to get container status \"c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635\": rpc error: code = NotFound desc = could not find container \"c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635\": container with ID starting with c66102f1ff265578a9faea1da6afbb04b05ed7d709386a14af423a35bf17b635 not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.712620 4913 scope.go:117] "RemoveContainer" containerID="f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.726885 4913 scope.go:117] "RemoveContainer" containerID="79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.740309 4913 scope.go:117] "RemoveContainer" containerID="a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9" Jan 21 06:40:06 crc 
kubenswrapper[4913]: I0121 06:40:06.757937 4913 scope.go:117] "RemoveContainer" containerID="f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.758372 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9\": container with ID starting with f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9 not found: ID does not exist" containerID="f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.758465 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9"} err="failed to get container status \"f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9\": rpc error: code = NotFound desc = could not find container \"f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9\": container with ID starting with f490428d838ab70825e9bdbffda51586b25c053b567caf820009e8d4bafae6b9 not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.758580 4913 scope.go:117] "RemoveContainer" containerID="79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.758958 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be\": container with ID starting with 79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be not found: ID does not exist" containerID="79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.759036 4913 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be"} err="failed to get container status \"79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be\": rpc error: code = NotFound desc = could not find container \"79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be\": container with ID starting with 79fb4425cd4a8a5f062be08bff934db8b91aae4a4ade7f34c7e79732065927be not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.759101 4913 scope.go:117] "RemoveContainer" containerID="a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.759663 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9\": container with ID starting with a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9 not found: ID does not exist" containerID="a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.759708 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9"} err="failed to get container status \"a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9\": rpc error: code = NotFound desc = could not find container \"a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9\": container with ID starting with a22ccd2785d7ff604f82c7c998d988a7614e426c31f0eae41f13b4b61be718c9 not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.759754 4913 scope.go:117] "RemoveContainer" containerID="ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.773639 4913 scope.go:117] "RemoveContainer" 
containerID="ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.774029 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3\": container with ID starting with ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3 not found: ID does not exist" containerID="ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.774123 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3"} err="failed to get container status \"ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3\": rpc error: code = NotFound desc = could not find container \"ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3\": container with ID starting with ab3a95e4879dfdead98a3924a946f7c5df2ec19b9fd0c1f391507f19e6d6ece3 not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.774192 4913 scope.go:117] "RemoveContainer" containerID="10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.787934 4913 scope.go:117] "RemoveContainer" containerID="d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.799907 4913 scope.go:117] "RemoveContainer" containerID="afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.812034 4913 scope.go:117] "RemoveContainer" containerID="10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.812419 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c\": container with ID starting with 10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c not found: ID does not exist" containerID="10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.812446 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c"} err="failed to get container status \"10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c\": rpc error: code = NotFound desc = could not find container \"10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c\": container with ID starting with 10d2f6db2ceb513f2fbc9dc8afd52f64a25b960272bcf20532ba2c1483ce055c not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.812472 4913 scope.go:117] "RemoveContainer" containerID="d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.812932 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc\": container with ID starting with d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc not found: ID does not exist" containerID="d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.812971 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc"} err="failed to get container status \"d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc\": rpc error: code = NotFound desc = could not find container 
\"d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc\": container with ID starting with d749051d8a5582f6c517bb567051f2c32fdd09ee0daaedb900170c8c482c35cc not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.813003 4913 scope.go:117] "RemoveContainer" containerID="afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.813553 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7\": container with ID starting with afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7 not found: ID does not exist" containerID="afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.813575 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7"} err="failed to get container status \"afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7\": rpc error: code = NotFound desc = could not find container \"afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7\": container with ID starting with afb3d48af76ad5c21c6fc56350ddf8e8c197b7ff991d562bd60cd32d0b1caee7 not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.813613 4913 scope.go:117] "RemoveContainer" containerID="791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.824815 4913 scope.go:117] "RemoveContainer" containerID="dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.840538 4913 scope.go:117] "RemoveContainer" containerID="1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148" Jan 21 06:40:06 crc 
kubenswrapper[4913]: I0121 06:40:06.855735 4913 scope.go:117] "RemoveContainer" containerID="791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.856088 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2\": container with ID starting with 791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2 not found: ID does not exist" containerID="791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.856127 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2"} err="failed to get container status \"791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2\": rpc error: code = NotFound desc = could not find container \"791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2\": container with ID starting with 791317c4009e72163f1489bf32990c0e220ca15eedbc79697deacf4a4f740cf2 not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.856154 4913 scope.go:117] "RemoveContainer" containerID="dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.856654 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28\": container with ID starting with dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28 not found: ID does not exist" containerID="dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.856691 4913 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28"} err="failed to get container status \"dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28\": rpc error: code = NotFound desc = could not find container \"dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28\": container with ID starting with dc64671b047b26794fa7beffe1452c7effba2073e128d7cba4094e8c3cd58f28 not found: ID does not exist" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.856754 4913 scope.go:117] "RemoveContainer" containerID="1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148" Jan 21 06:40:06 crc kubenswrapper[4913]: E0121 06:40:06.856986 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148\": container with ID starting with 1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148 not found: ID does not exist" containerID="1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148" Jan 21 06:40:06 crc kubenswrapper[4913]: I0121 06:40:06.857011 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148"} err="failed to get container status \"1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148\": rpc error: code = NotFound desc = could not find container \"1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148\": container with ID starting with 1e488e4d28434e57822e9eb359467f0a20b9a3a0e6f82e85617d302ce6ea7148 not found: ID does not exist" Jan 21 06:40:07 crc kubenswrapper[4913]: I0121 06:40:07.558264 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mmmzm" Jan 21 06:40:08 crc kubenswrapper[4913]: I0121 06:40:08.548762 4913 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" path="/var/lib/kubelet/pods/2255f06f-74ad-4308-9575-c04f8c24d4d5/volumes" Jan 21 06:40:08 crc kubenswrapper[4913]: I0121 06:40:08.550946 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" path="/var/lib/kubelet/pods/92ab7368-d5ff-4ecc-846a-96791a313bce/volumes" Jan 21 06:40:08 crc kubenswrapper[4913]: I0121 06:40:08.552846 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" path="/var/lib/kubelet/pods/b5a378fe-18a6-4be0-8d56-eaddc377bd8b/volumes" Jan 21 06:40:08 crc kubenswrapper[4913]: I0121 06:40:08.555464 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d976374c-9adc-426a-9593-43e617e72281" path="/var/lib/kubelet/pods/d976374c-9adc-426a-9593-43e617e72281/volumes" Jan 21 06:40:08 crc kubenswrapper[4913]: I0121 06:40:08.556959 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b20a33-f426-426f-9657-3d11d403629f" path="/var/lib/kubelet/pods/f2b20a33-f426-426f-9657-3d11d403629f/volumes" Jan 21 06:40:08 crc kubenswrapper[4913]: I0121 06:40:08.559277 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" path="/var/lib/kubelet/pods/f3e3e7a7-a59e-4d12-8499-38ad4a72832d/volumes" Jan 21 06:40:10 crc kubenswrapper[4913]: I0121 06:40:10.361731 4913 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 06:40:14 crc kubenswrapper[4913]: I0121 06:40:14.949552 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.366513 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bclp4"] Jan 21 06:40:19 crc 
kubenswrapper[4913]: I0121 06:40:19.367347 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" podUID="527ef351-fb35-4f58-ae7b-d410c23496c6" containerName="controller-manager" containerID="cri-o://e0fb25a613adfea6f7ac86ba60e0bf6f84329c9ddc4e60d2b7f94bc10001b0f5" gracePeriod=30 Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.460116 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"] Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.460533 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" podUID="82ebe95b-4e82-49aa-8693-52c0998ec7de" containerName="route-controller-manager" containerID="cri-o://a991521fdebd53fcfdb11ad2d1d02cc08d8f7880bc37d9cc4a09f1c6afa7cf1b" gracePeriod=30 Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.626955 4913 generic.go:334] "Generic (PLEG): container finished" podID="527ef351-fb35-4f58-ae7b-d410c23496c6" containerID="e0fb25a613adfea6f7ac86ba60e0bf6f84329c9ddc4e60d2b7f94bc10001b0f5" exitCode=0 Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.627028 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" event={"ID":"527ef351-fb35-4f58-ae7b-d410c23496c6","Type":"ContainerDied","Data":"e0fb25a613adfea6f7ac86ba60e0bf6f84329c9ddc4e60d2b7f94bc10001b0f5"} Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.628869 4913 generic.go:334] "Generic (PLEG): container finished" podID="82ebe95b-4e82-49aa-8693-52c0998ec7de" containerID="a991521fdebd53fcfdb11ad2d1d02cc08d8f7880bc37d9cc4a09f1c6afa7cf1b" exitCode=0 Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.628913 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" event={"ID":"82ebe95b-4e82-49aa-8693-52c0998ec7de","Type":"ContainerDied","Data":"a991521fdebd53fcfdb11ad2d1d02cc08d8f7880bc37d9cc4a09f1c6afa7cf1b"} Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.739165 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.815723 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918530 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-proxy-ca-bundles\") pod \"527ef351-fb35-4f58-ae7b-d410c23496c6\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918600 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527ef351-fb35-4f58-ae7b-d410c23496c6-serving-cert\") pod \"527ef351-fb35-4f58-ae7b-d410c23496c6\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918637 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhjlq\" (UniqueName: \"kubernetes.io/projected/527ef351-fb35-4f58-ae7b-d410c23496c6-kube-api-access-nhjlq\") pod \"527ef351-fb35-4f58-ae7b-d410c23496c6\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918681 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-config\") pod 
\"527ef351-fb35-4f58-ae7b-d410c23496c6\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918704 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-config\") pod \"82ebe95b-4e82-49aa-8693-52c0998ec7de\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918720 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqr5t\" (UniqueName: \"kubernetes.io/projected/82ebe95b-4e82-49aa-8693-52c0998ec7de-kube-api-access-bqr5t\") pod \"82ebe95b-4e82-49aa-8693-52c0998ec7de\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918734 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-client-ca\") pod \"82ebe95b-4e82-49aa-8693-52c0998ec7de\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918759 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-client-ca\") pod \"527ef351-fb35-4f58-ae7b-d410c23496c6\" (UID: \"527ef351-fb35-4f58-ae7b-d410c23496c6\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.918839 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82ebe95b-4e82-49aa-8693-52c0998ec7de-serving-cert\") pod \"82ebe95b-4e82-49aa-8693-52c0998ec7de\" (UID: \"82ebe95b-4e82-49aa-8693-52c0998ec7de\") " Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.919352 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "527ef351-fb35-4f58-ae7b-d410c23496c6" (UID: "527ef351-fb35-4f58-ae7b-d410c23496c6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.919825 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-client-ca" (OuterVolumeSpecName: "client-ca") pod "527ef351-fb35-4f58-ae7b-d410c23496c6" (UID: "527ef351-fb35-4f58-ae7b-d410c23496c6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.920050 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-config" (OuterVolumeSpecName: "config") pod "527ef351-fb35-4f58-ae7b-d410c23496c6" (UID: "527ef351-fb35-4f58-ae7b-d410c23496c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.920114 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-config" (OuterVolumeSpecName: "config") pod "82ebe95b-4e82-49aa-8693-52c0998ec7de" (UID: "82ebe95b-4e82-49aa-8693-52c0998ec7de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.920643 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-client-ca" (OuterVolumeSpecName: "client-ca") pod "82ebe95b-4e82-49aa-8693-52c0998ec7de" (UID: "82ebe95b-4e82-49aa-8693-52c0998ec7de"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.924471 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/527ef351-fb35-4f58-ae7b-d410c23496c6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "527ef351-fb35-4f58-ae7b-d410c23496c6" (UID: "527ef351-fb35-4f58-ae7b-d410c23496c6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.924621 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ebe95b-4e82-49aa-8693-52c0998ec7de-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "82ebe95b-4e82-49aa-8693-52c0998ec7de" (UID: "82ebe95b-4e82-49aa-8693-52c0998ec7de"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.925106 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ebe95b-4e82-49aa-8693-52c0998ec7de-kube-api-access-bqr5t" (OuterVolumeSpecName: "kube-api-access-bqr5t") pod "82ebe95b-4e82-49aa-8693-52c0998ec7de" (UID: "82ebe95b-4e82-49aa-8693-52c0998ec7de"). InnerVolumeSpecName "kube-api-access-bqr5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:19 crc kubenswrapper[4913]: I0121 06:40:19.925502 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527ef351-fb35-4f58-ae7b-d410c23496c6-kube-api-access-nhjlq" (OuterVolumeSpecName: "kube-api-access-nhjlq") pod "527ef351-fb35-4f58-ae7b-d410c23496c6" (UID: "527ef351-fb35-4f58-ae7b-d410c23496c6"). InnerVolumeSpecName "kube-api-access-nhjlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.020867 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82ebe95b-4e82-49aa-8693-52c0998ec7de-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021014 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021091 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/527ef351-fb35-4f58-ae7b-d410c23496c6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021157 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhjlq\" (UniqueName: \"kubernetes.io/projected/527ef351-fb35-4f58-ae7b-d410c23496c6-kube-api-access-nhjlq\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021216 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021271 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021326 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqr5t\" (UniqueName: \"kubernetes.io/projected/82ebe95b-4e82-49aa-8693-52c0998ec7de-kube-api-access-bqr5t\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021629 4913 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82ebe95b-4e82-49aa-8693-52c0998ec7de-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.021697 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/527ef351-fb35-4f58-ae7b-d410c23496c6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559000 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"] Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559309 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559339 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559382 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerName="marketplace-operator" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559389 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerName="marketplace-operator" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559395 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559402 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559429 4913 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559436 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559443 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559449 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559473 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559479 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559522 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559529 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559536 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559543 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559551 4913 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559556 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559565 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559571 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559578 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559599 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559606 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ebe95b-4e82-49aa-8693-52c0998ec7de" containerName="route-controller-manager" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559612 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ebe95b-4e82-49aa-8693-52c0998ec7de" containerName="route-controller-manager" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559624 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559629 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559638 4913 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559644 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559651 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559657 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="extract-content" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559665 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527ef351-fb35-4f58-ae7b-d410c23496c6" containerName="controller-manager" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559671 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="527ef351-fb35-4f58-ae7b-d410c23496c6" containerName="controller-manager" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559678 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559684 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: E0121 06:40:20.559691 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559697 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="extract-utilities" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559786 4913 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2255f06f-74ad-4308-9575-c04f8c24d4d5" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559796 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b20a33-f426-426f-9657-3d11d403629f" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559808 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a378fe-18a6-4be0-8d56-eaddc377bd8b" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559815 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="92ab7368-d5ff-4ecc-846a-96791a313bce" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559823 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ebe95b-4e82-49aa-8693-52c0998ec7de" containerName="route-controller-manager" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559830 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e3e7a7-a59e-4d12-8499-38ad4a72832d" containerName="marketplace-operator" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559838 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="d976374c-9adc-426a-9593-43e617e72281" containerName="registry-server" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.559846 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="527ef351-fb35-4f58-ae7b-d410c23496c6" containerName="controller-manager" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.561426 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.567527 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.568272 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.579134 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.579184 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.635636 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" event={"ID":"527ef351-fb35-4f58-ae7b-d410c23496c6","Type":"ContainerDied","Data":"67d6415346a44324c1ea19c76e9d0bfa267f53b8f3aa0e917d549d642e200abc"} Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.635662 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bclp4" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.635682 4913 scope.go:117] "RemoveContainer" containerID="e0fb25a613adfea6f7ac86ba60e0bf6f84329c9ddc4e60d2b7f94bc10001b0f5" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.637908 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" event={"ID":"82ebe95b-4e82-49aa-8693-52c0998ec7de","Type":"ContainerDied","Data":"ef859a9ce5257c54c90bc7a069572614d5e182e82cd48180d7700276e0fbcbea"} Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.637948 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.653509 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bclp4"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.656952 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bclp4"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.657017 4913 scope.go:117] "RemoveContainer" containerID="a991521fdebd53fcfdb11ad2d1d02cc08d8f7880bc37d9cc4a09f1c6afa7cf1b" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.666296 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.666498 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tbgjj"] Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728011 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-client-ca\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728273 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-config\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728373 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-config\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728518 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-proxy-ca-bundles\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728637 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45sb8\" (UniqueName: \"kubernetes.io/projected/613ee342-c0db-4722-92fa-633a60ecbb41-kube-api-access-45sb8\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " 
pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728719 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613ee342-c0db-4722-92fa-633a60ecbb41-serving-cert\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728794 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f688d1c5-45f1-4e55-a987-df6cf2b954f4-serving-cert\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728867 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-client-ca\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.728957 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhqzs\" (UniqueName: \"kubernetes.io/projected/f688d1c5-45f1-4e55-a987-df6cf2b954f4-kube-api-access-xhqzs\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.830486 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f688d1c5-45f1-4e55-a987-df6cf2b954f4-serving-cert\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.831409 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-client-ca\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.832198 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhqzs\" (UniqueName: \"kubernetes.io/projected/f688d1c5-45f1-4e55-a987-df6cf2b954f4-kube-api-access-xhqzs\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.832569 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-client-ca\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.833538 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-config\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " 
pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.834515 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-config\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.835883 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-proxy-ca-bundles\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.833223 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-client-ca\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.832121 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-client-ca\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.835435 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-config\") pod 
\"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.836037 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45sb8\" (UniqueName: \"kubernetes.io/projected/613ee342-c0db-4722-92fa-633a60ecbb41-kube-api-access-45sb8\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.834470 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-config\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.836326 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613ee342-c0db-4722-92fa-633a60ecbb41-serving-cert\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.836886 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f688d1c5-45f1-4e55-a987-df6cf2b954f4-serving-cert\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.837924 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-proxy-ca-bundles\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.846090 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613ee342-c0db-4722-92fa-633a60ecbb41-serving-cert\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.851062 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhqzs\" (UniqueName: \"kubernetes.io/projected/f688d1c5-45f1-4e55-a987-df6cf2b954f4-kube-api-access-xhqzs\") pod \"route-controller-manager-5f764767cf-zhqgh\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.862202 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45sb8\" (UniqueName: \"kubernetes.io/projected/613ee342-c0db-4722-92fa-633a60ecbb41-kube-api-access-45sb8\") pod \"controller-manager-f4848bd7c-qsrhg\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.919991 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.930951 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:20 crc kubenswrapper[4913]: I0121 06:40:20.990069 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.150358 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"] Jan 21 06:40:21 crc kubenswrapper[4913]: W0121 06:40:21.160334 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod613ee342_c0db_4722_92fa_633a60ecbb41.slice/crio-e6951398b26b1be8dba895813e2ad0c3e98e301ef6d9b0ed355ac9b675675b51 WatchSource:0}: Error finding container e6951398b26b1be8dba895813e2ad0c3e98e301ef6d9b0ed355ac9b675675b51: Status 404 returned error can't find the container with id e6951398b26b1be8dba895813e2ad0c3e98e301ef6d9b0ed355ac9b675675b51 Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.180337 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"] Jan 21 06:40:21 crc kubenswrapper[4913]: W0121 06:40:21.190365 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf688d1c5_45f1_4e55_a987_df6cf2b954f4.slice/crio-31f3950f1e126310a093f8d1fedc028c2d7f74bc7aded4c498f470c478ff4d15 WatchSource:0}: Error finding container 31f3950f1e126310a093f8d1fedc028c2d7f74bc7aded4c498f470c478ff4d15: Status 404 returned error can't find the container with id 31f3950f1e126310a093f8d1fedc028c2d7f74bc7aded4c498f470c478ff4d15 Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.645246 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" 
event={"ID":"f688d1c5-45f1-4e55-a987-df6cf2b954f4","Type":"ContainerStarted","Data":"85435cc330e894ecbba077f1bc8917cf7eef9346f872884f1bf571d56f017a2a"} Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.645599 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" event={"ID":"f688d1c5-45f1-4e55-a987-df6cf2b954f4","Type":"ContainerStarted","Data":"31f3950f1e126310a093f8d1fedc028c2d7f74bc7aded4c498f470c478ff4d15"} Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.647309 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.649390 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" event={"ID":"613ee342-c0db-4722-92fa-633a60ecbb41","Type":"ContainerStarted","Data":"11dfa8f82f25b1d32d2297a76f14050d1f3d9d3fc7f6f02294f4f057ec07d6cc"} Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.649435 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" event={"ID":"613ee342-c0db-4722-92fa-633a60ecbb41","Type":"ContainerStarted","Data":"e6951398b26b1be8dba895813e2ad0c3e98e301ef6d9b0ed355ac9b675675b51"} Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.649970 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.651798 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.656203 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.682073 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" podStartSLOduration=2.682056076 podStartE2EDuration="2.682056076s" podCreationTimestamp="2026-01-21 06:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:40:21.668103879 +0000 UTC m=+311.464463552" watchObservedRunningTime="2026-01-21 06:40:21.682056076 +0000 UTC m=+311.478415749" Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.724700 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" podStartSLOduration=2.724684959 podStartE2EDuration="2.724684959s" podCreationTimestamp="2026-01-21 06:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:40:21.684159514 +0000 UTC m=+311.480519187" watchObservedRunningTime="2026-01-21 06:40:21.724684959 +0000 UTC m=+311.521044632" Jan 21 06:40:21 crc kubenswrapper[4913]: I0121 06:40:21.956804 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 06:40:22 crc kubenswrapper[4913]: I0121 06:40:22.532697 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="527ef351-fb35-4f58-ae7b-d410c23496c6" path="/var/lib/kubelet/pods/527ef351-fb35-4f58-ae7b-d410c23496c6/volumes" Jan 21 06:40:22 crc kubenswrapper[4913]: I0121 06:40:22.533502 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ebe95b-4e82-49aa-8693-52c0998ec7de" path="/var/lib/kubelet/pods/82ebe95b-4e82-49aa-8693-52c0998ec7de/volumes" Jan 21 06:40:26 crc 
kubenswrapper[4913]: I0121 06:40:26.240536 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"] Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.241001 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" podUID="613ee342-c0db-4722-92fa-633a60ecbb41" containerName="controller-manager" containerID="cri-o://11dfa8f82f25b1d32d2297a76f14050d1f3d9d3fc7f6f02294f4f057ec07d6cc" gracePeriod=30 Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.258027 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"] Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.258254 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" podUID="f688d1c5-45f1-4e55-a987-df6cf2b954f4" containerName="route-controller-manager" containerID="cri-o://85435cc330e894ecbba077f1bc8917cf7eef9346f872884f1bf571d56f017a2a" gracePeriod=30 Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.685089 4913 generic.go:334] "Generic (PLEG): container finished" podID="f688d1c5-45f1-4e55-a987-df6cf2b954f4" containerID="85435cc330e894ecbba077f1bc8917cf7eef9346f872884f1bf571d56f017a2a" exitCode=0 Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.685159 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" event={"ID":"f688d1c5-45f1-4e55-a987-df6cf2b954f4","Type":"ContainerDied","Data":"85435cc330e894ecbba077f1bc8917cf7eef9346f872884f1bf571d56f017a2a"} Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.686652 4913 generic.go:334] "Generic (PLEG): container finished" podID="613ee342-c0db-4722-92fa-633a60ecbb41" 
containerID="11dfa8f82f25b1d32d2297a76f14050d1f3d9d3fc7f6f02294f4f057ec07d6cc" exitCode=0 Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.686670 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" event={"ID":"613ee342-c0db-4722-92fa-633a60ecbb41","Type":"ContainerDied","Data":"11dfa8f82f25b1d32d2297a76f14050d1f3d9d3fc7f6f02294f4f057ec07d6cc"} Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.792748 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.915533 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhqzs\" (UniqueName: \"kubernetes.io/projected/f688d1c5-45f1-4e55-a987-df6cf2b954f4-kube-api-access-xhqzs\") pod \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.915575 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-client-ca\") pod \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.915618 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-config\") pod \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.915653 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f688d1c5-45f1-4e55-a987-df6cf2b954f4-serving-cert\") pod 
\"f688d1c5-45f1-4e55-a987-df6cf2b954f4\" (UID: \"f688d1c5-45f1-4e55-a987-df6cf2b954f4\") " Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.916294 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-client-ca" (OuterVolumeSpecName: "client-ca") pod "f688d1c5-45f1-4e55-a987-df6cf2b954f4" (UID: "f688d1c5-45f1-4e55-a987-df6cf2b954f4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.916396 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-config" (OuterVolumeSpecName: "config") pod "f688d1c5-45f1-4e55-a987-df6cf2b954f4" (UID: "f688d1c5-45f1-4e55-a987-df6cf2b954f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.920443 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f688d1c5-45f1-4e55-a987-df6cf2b954f4-kube-api-access-xhqzs" (OuterVolumeSpecName: "kube-api-access-xhqzs") pod "f688d1c5-45f1-4e55-a987-df6cf2b954f4" (UID: "f688d1c5-45f1-4e55-a987-df6cf2b954f4"). InnerVolumeSpecName "kube-api-access-xhqzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.920519 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f688d1c5-45f1-4e55-a987-df6cf2b954f4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f688d1c5-45f1-4e55-a987-df6cf2b954f4" (UID: "f688d1c5-45f1-4e55-a987-df6cf2b954f4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:40:26 crc kubenswrapper[4913]: I0121 06:40:26.927887 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.016916 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhqzs\" (UniqueName: \"kubernetes.io/projected/f688d1c5-45f1-4e55-a987-df6cf2b954f4-kube-api-access-xhqzs\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.016945 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.016955 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f688d1c5-45f1-4e55-a987-df6cf2b954f4-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.016963 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f688d1c5-45f1-4e55-a987-df6cf2b954f4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.118331 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-proxy-ca-bundles\") pod \"613ee342-c0db-4722-92fa-633a60ecbb41\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.118382 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-config\") pod \"613ee342-c0db-4722-92fa-633a60ecbb41\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.118434 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-45sb8\" (UniqueName: \"kubernetes.io/projected/613ee342-c0db-4722-92fa-633a60ecbb41-kube-api-access-45sb8\") pod \"613ee342-c0db-4722-92fa-633a60ecbb41\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.118471 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613ee342-c0db-4722-92fa-633a60ecbb41-serving-cert\") pod \"613ee342-c0db-4722-92fa-633a60ecbb41\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.118493 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-client-ca\") pod \"613ee342-c0db-4722-92fa-633a60ecbb41\" (UID: \"613ee342-c0db-4722-92fa-633a60ecbb41\") " Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.119254 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-client-ca" (OuterVolumeSpecName: "client-ca") pod "613ee342-c0db-4722-92fa-633a60ecbb41" (UID: "613ee342-c0db-4722-92fa-633a60ecbb41"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.119296 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "613ee342-c0db-4722-92fa-633a60ecbb41" (UID: "613ee342-c0db-4722-92fa-633a60ecbb41"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.119467 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-config" (OuterVolumeSpecName: "config") pod "613ee342-c0db-4722-92fa-633a60ecbb41" (UID: "613ee342-c0db-4722-92fa-633a60ecbb41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.124037 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/613ee342-c0db-4722-92fa-633a60ecbb41-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "613ee342-c0db-4722-92fa-633a60ecbb41" (UID: "613ee342-c0db-4722-92fa-633a60ecbb41"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.124119 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/613ee342-c0db-4722-92fa-633a60ecbb41-kube-api-access-45sb8" (OuterVolumeSpecName: "kube-api-access-45sb8") pod "613ee342-c0db-4722-92fa-633a60ecbb41" (UID: "613ee342-c0db-4722-92fa-633a60ecbb41"). InnerVolumeSpecName "kube-api-access-45sb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.220166 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45sb8\" (UniqueName: \"kubernetes.io/projected/613ee342-c0db-4722-92fa-633a60ecbb41-kube-api-access-45sb8\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.220234 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/613ee342-c0db-4722-92fa-633a60ecbb41-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.220245 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.220253 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.220262 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613ee342-c0db-4722-92fa-633a60ecbb41-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.693737 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" event={"ID":"f688d1c5-45f1-4e55-a987-df6cf2b954f4","Type":"ContainerDied","Data":"31f3950f1e126310a093f8d1fedc028c2d7f74bc7aded4c498f470c478ff4d15"} Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.693786 4913 scope.go:117] "RemoveContainer" containerID="85435cc330e894ecbba077f1bc8917cf7eef9346f872884f1bf571d56f017a2a" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.694552 4913 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.695707 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.695709 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4848bd7c-qsrhg" event={"ID":"613ee342-c0db-4722-92fa-633a60ecbb41","Type":"ContainerDied","Data":"e6951398b26b1be8dba895813e2ad0c3e98e301ef6d9b0ed355ac9b675675b51"} Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.714750 4913 scope.go:117] "RemoveContainer" containerID="11dfa8f82f25b1d32d2297a76f14050d1f3d9d3fc7f6f02294f4f057ec07d6cc" Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.726395 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"] Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.734072 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f4848bd7c-qsrhg"] Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.745305 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"] Jan 21 06:40:27 crc kubenswrapper[4913]: I0121 06:40:27.750876 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f764767cf-zhqgh"] Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.537903 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="613ee342-c0db-4722-92fa-633a60ecbb41" path="/var/lib/kubelet/pods/613ee342-c0db-4722-92fa-633a60ecbb41/volumes" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.539358 4913 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="f688d1c5-45f1-4e55-a987-df6cf2b954f4" path="/var/lib/kubelet/pods/f688d1c5-45f1-4e55-a987-df6cf2b954f4/volumes" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.566799 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bd9d8574d-grvnz"] Jan 21 06:40:28 crc kubenswrapper[4913]: E0121 06:40:28.567223 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="613ee342-c0db-4722-92fa-633a60ecbb41" containerName="controller-manager" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.567263 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="613ee342-c0db-4722-92fa-633a60ecbb41" containerName="controller-manager" Jan 21 06:40:28 crc kubenswrapper[4913]: E0121 06:40:28.567312 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f688d1c5-45f1-4e55-a987-df6cf2b954f4" containerName="route-controller-manager" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.567331 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f688d1c5-45f1-4e55-a987-df6cf2b954f4" containerName="route-controller-manager" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.567532 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="613ee342-c0db-4722-92fa-633a60ecbb41" containerName="controller-manager" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.567579 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f688d1c5-45f1-4e55-a987-df6cf2b954f4" containerName="route-controller-manager" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.568223 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.571096 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.571425 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.571708 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.571977 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.572215 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.572401 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"] Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.572628 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.573279 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.576686 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd9d8574d-grvnz"] Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.576916 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.577771 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.577809 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.577880 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.577902 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.578065 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.588713 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.592632 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"] Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.739032 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-client-ca\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.741106 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799094d2-a718-4044-b16d-8a011cc3ecaa-serving-cert\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.741283 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-config\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.741462 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgk7v\" (UniqueName: \"kubernetes.io/projected/5b31176e-1ac2-453f-8750-e2524da5cb9b-kube-api-access-vgk7v\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.741696 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-proxy-ca-bundles\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: 
\"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.741877 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b31176e-1ac2-453f-8750-e2524da5cb9b-serving-cert\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.742043 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn8zn\" (UniqueName: \"kubernetes.io/projected/799094d2-a718-4044-b16d-8a011cc3ecaa-kube-api-access-bn8zn\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.742234 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-client-ca\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.742395 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-config\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843578 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b31176e-1ac2-453f-8750-e2524da5cb9b-serving-cert\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843726 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn8zn\" (UniqueName: \"kubernetes.io/projected/799094d2-a718-4044-b16d-8a011cc3ecaa-kube-api-access-bn8zn\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843808 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-client-ca\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843845 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-config\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843906 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-client-ca\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 
06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843950 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799094d2-a718-4044-b16d-8a011cc3ecaa-serving-cert\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.843984 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-config\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.844018 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgk7v\" (UniqueName: \"kubernetes.io/projected/5b31176e-1ac2-453f-8750-e2524da5cb9b-kube-api-access-vgk7v\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.844055 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-proxy-ca-bundles\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.845558 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-proxy-ca-bundles\") pod 
\"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.846699 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-client-ca\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.848164 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-config\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.848564 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-client-ca\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.848839 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-config\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.850972 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/799094d2-a718-4044-b16d-8a011cc3ecaa-serving-cert\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.853055 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b31176e-1ac2-453f-8750-e2524da5cb9b-serving-cert\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.869581 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgk7v\" (UniqueName: \"kubernetes.io/projected/5b31176e-1ac2-453f-8750-e2524da5cb9b-kube-api-access-vgk7v\") pod \"route-controller-manager-76ff6d74c8-6fgg4\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.870986 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn8zn\" (UniqueName: \"kubernetes.io/projected/799094d2-a718-4044-b16d-8a011cc3ecaa-kube-api-access-bn8zn\") pod \"controller-manager-bd9d8574d-grvnz\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.894965 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:28 crc kubenswrapper[4913]: I0121 06:40:28.903812 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.126577 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd9d8574d-grvnz"] Jan 21 06:40:29 crc kubenswrapper[4913]: W0121 06:40:29.134219 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod799094d2_a718_4044_b16d_8a011cc3ecaa.slice/crio-31556e60258212c370c8931267e50827df2965a866ba8beef15717c83d5b56e5 WatchSource:0}: Error finding container 31556e60258212c370c8931267e50827df2965a866ba8beef15717c83d5b56e5: Status 404 returned error can't find the container with id 31556e60258212c370c8931267e50827df2965a866ba8beef15717c83d5b56e5 Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.388385 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"] Jan 21 06:40:29 crc kubenswrapper[4913]: W0121 06:40:29.396000 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b31176e_1ac2_453f_8750_e2524da5cb9b.slice/crio-d43e0ee2e79d0569c35f4b0ce3edb169ba4b011c650e633d237693a8820e1833 WatchSource:0}: Error finding container d43e0ee2e79d0569c35f4b0ce3edb169ba4b011c650e633d237693a8820e1833: Status 404 returned error can't find the container with id d43e0ee2e79d0569c35f4b0ce3edb169ba4b011c650e633d237693a8820e1833 Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.711137 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" event={"ID":"5b31176e-1ac2-453f-8750-e2524da5cb9b","Type":"ContainerStarted","Data":"3dd985664eeed11ee5d1c18284bc3123d2bd2409e3e99dba66f40acaa2032a08"} Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.711435 4913 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" event={"ID":"5b31176e-1ac2-453f-8750-e2524da5cb9b","Type":"ContainerStarted","Data":"d43e0ee2e79d0569c35f4b0ce3edb169ba4b011c650e633d237693a8820e1833"} Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.711454 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.715727 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" event={"ID":"799094d2-a718-4044-b16d-8a011cc3ecaa","Type":"ContainerStarted","Data":"f35fa3200dbe9c898b5e2c2080a410b5da87743837e3a9c9c8d1582776ad2240"} Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.715788 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" event={"ID":"799094d2-a718-4044-b16d-8a011cc3ecaa","Type":"ContainerStarted","Data":"31556e60258212c370c8931267e50827df2965a866ba8beef15717c83d5b56e5"} Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.716031 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.721928 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.734269 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" podStartSLOduration=3.734246194 podStartE2EDuration="3.734246194s" podCreationTimestamp="2026-01-21 06:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:40:29.728751221 +0000 UTC m=+319.525110934" watchObservedRunningTime="2026-01-21 06:40:29.734246194 +0000 UTC m=+319.530605897" Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.753116 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" podStartSLOduration=3.753096656 podStartE2EDuration="3.753096656s" podCreationTimestamp="2026-01-21 06:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:40:29.75033551 +0000 UTC m=+319.546695183" watchObservedRunningTime="2026-01-21 06:40:29.753096656 +0000 UTC m=+319.549456319" Jan 21 06:40:29 crc kubenswrapper[4913]: I0121 06:40:29.889459 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:30 crc kubenswrapper[4913]: I0121 06:40:30.425916 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 06:40:38 crc kubenswrapper[4913]: I0121 06:40:38.319190 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:40:38 crc kubenswrapper[4913]: I0121 06:40:38.319896 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:40:39 crc 
kubenswrapper[4913]: I0121 06:40:39.403325 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"] Jan 21 06:40:39 crc kubenswrapper[4913]: I0121 06:40:39.404200 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" podUID="5b31176e-1ac2-453f-8750-e2524da5cb9b" containerName="route-controller-manager" containerID="cri-o://3dd985664eeed11ee5d1c18284bc3123d2bd2409e3e99dba66f40acaa2032a08" gracePeriod=30 Jan 21 06:40:39 crc kubenswrapper[4913]: I0121 06:40:39.773001 4913 generic.go:334] "Generic (PLEG): container finished" podID="5b31176e-1ac2-453f-8750-e2524da5cb9b" containerID="3dd985664eeed11ee5d1c18284bc3123d2bd2409e3e99dba66f40acaa2032a08" exitCode=0 Jan 21 06:40:39 crc kubenswrapper[4913]: I0121 06:40:39.773089 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" event={"ID":"5b31176e-1ac2-453f-8750-e2524da5cb9b","Type":"ContainerDied","Data":"3dd985664eeed11ee5d1c18284bc3123d2bd2409e3e99dba66f40acaa2032a08"} Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.532174 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.571865 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"] Jan 21 06:40:40 crc kubenswrapper[4913]: E0121 06:40:40.572150 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b31176e-1ac2-453f-8750-e2524da5cb9b" containerName="route-controller-manager" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.572168 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b31176e-1ac2-453f-8750-e2524da5cb9b" containerName="route-controller-manager" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.572295 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b31176e-1ac2-453f-8750-e2524da5cb9b" containerName="route-controller-manager" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.572767 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.591660 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4412c4f-3b57-428e-8257-1bd0e664a1ad-config\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.591737 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4412c4f-3b57-428e-8257-1bd0e664a1ad-serving-cert\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.591787 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6xpc\" (UniqueName: \"kubernetes.io/projected/e4412c4f-3b57-428e-8257-1bd0e664a1ad-kube-api-access-x6xpc\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.591848 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4412c4f-3b57-428e-8257-1bd0e664a1ad-client-ca\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.603336 4913 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"] Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.692450 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-config\") pod \"5b31176e-1ac2-453f-8750-e2524da5cb9b\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.692890 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-client-ca\") pod \"5b31176e-1ac2-453f-8750-e2524da5cb9b\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.693029 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b31176e-1ac2-453f-8750-e2524da5cb9b-serving-cert\") pod \"5b31176e-1ac2-453f-8750-e2524da5cb9b\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.693225 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgk7v\" (UniqueName: \"kubernetes.io/projected/5b31176e-1ac2-453f-8750-e2524da5cb9b-kube-api-access-vgk7v\") pod \"5b31176e-1ac2-453f-8750-e2524da5cb9b\" (UID: \"5b31176e-1ac2-453f-8750-e2524da5cb9b\") " Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.693501 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4412c4f-3b57-428e-8257-1bd0e664a1ad-serving-cert\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:40 crc 
kubenswrapper[4913]: I0121 06:40:40.693613 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-config" (OuterVolumeSpecName: "config") pod "5b31176e-1ac2-453f-8750-e2524da5cb9b" (UID: "5b31176e-1ac2-453f-8750-e2524da5cb9b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.693753 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6xpc\" (UniqueName: \"kubernetes.io/projected/e4412c4f-3b57-428e-8257-1bd0e664a1ad-kube-api-access-x6xpc\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.693879 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4412c4f-3b57-428e-8257-1bd0e664a1ad-client-ca\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.694062 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4412c4f-3b57-428e-8257-1bd0e664a1ad-config\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.694198 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 
06:40:40.693548 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-client-ca" (OuterVolumeSpecName: "client-ca") pod "5b31176e-1ac2-453f-8750-e2524da5cb9b" (UID: "5b31176e-1ac2-453f-8750-e2524da5cb9b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.695389 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4412c4f-3b57-428e-8257-1bd0e664a1ad-config\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.699915 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e4412c4f-3b57-428e-8257-1bd0e664a1ad-client-ca\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.700168 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b31176e-1ac2-453f-8750-e2524da5cb9b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5b31176e-1ac2-453f-8750-e2524da5cb9b" (UID: "5b31176e-1ac2-453f-8750-e2524da5cb9b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.700257 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b31176e-1ac2-453f-8750-e2524da5cb9b-kube-api-access-vgk7v" (OuterVolumeSpecName: "kube-api-access-vgk7v") pod "5b31176e-1ac2-453f-8750-e2524da5cb9b" (UID: "5b31176e-1ac2-453f-8750-e2524da5cb9b"). InnerVolumeSpecName "kube-api-access-vgk7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.701869 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4412c4f-3b57-428e-8257-1bd0e664a1ad-serving-cert\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.711409 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6xpc\" (UniqueName: \"kubernetes.io/projected/e4412c4f-3b57-428e-8257-1bd0e664a1ad-kube-api-access-x6xpc\") pod \"route-controller-manager-d7659b446-6k28h\" (UID: \"e4412c4f-3b57-428e-8257-1bd0e664a1ad\") " pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.780198 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" event={"ID":"5b31176e-1ac2-453f-8750-e2524da5cb9b","Type":"ContainerDied","Data":"d43e0ee2e79d0569c35f4b0ce3edb169ba4b011c650e633d237693a8820e1833"} Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.780265 4913 scope.go:117] "RemoveContainer" containerID="3dd985664eeed11ee5d1c18284bc3123d2bd2409e3e99dba66f40acaa2032a08" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.780261 4913 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.795242 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b31176e-1ac2-453f-8750-e2524da5cb9b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.795270 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b31176e-1ac2-453f-8750-e2524da5cb9b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.795284 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgk7v\" (UniqueName: \"kubernetes.io/projected/5b31176e-1ac2-453f-8750-e2524da5cb9b-kube-api-access-vgk7v\") on node \"crc\" DevicePath \"\"" Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.810295 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"] Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.813314 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76ff6d74c8-6fgg4"] Jan 21 06:40:40 crc kubenswrapper[4913]: I0121 06:40:40.901093 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:41 crc kubenswrapper[4913]: I0121 06:40:41.352251 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h"] Jan 21 06:40:41 crc kubenswrapper[4913]: I0121 06:40:41.790236 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" event={"ID":"e4412c4f-3b57-428e-8257-1bd0e664a1ad","Type":"ContainerStarted","Data":"e7fd15f7e91eb86d6fc8070c6e67476f7d90077f660a5091b97b7b70b42a386f"} Jan 21 06:40:41 crc kubenswrapper[4913]: I0121 06:40:41.790287 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" event={"ID":"e4412c4f-3b57-428e-8257-1bd0e664a1ad","Type":"ContainerStarted","Data":"ee749b927d29b8c81909378c1ab9c428c847c05bd99d1c87b2d5a7695e8b627e"} Jan 21 06:40:41 crc kubenswrapper[4913]: I0121 06:40:41.790630 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:41 crc kubenswrapper[4913]: I0121 06:40:41.823708 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" podStartSLOduration=2.823683858 podStartE2EDuration="2.823683858s" podCreationTimestamp="2026-01-21 06:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:40:41.821783295 +0000 UTC m=+331.618142998" watchObservedRunningTime="2026-01-21 06:40:41.823683858 +0000 UTC m=+331.620043571" Jan 21 06:40:42 crc kubenswrapper[4913]: I0121 06:40:42.054687 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-d7659b446-6k28h" Jan 21 06:40:42 crc kubenswrapper[4913]: I0121 06:40:42.539479 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b31176e-1ac2-453f-8750-e2524da5cb9b" path="/var/lib/kubelet/pods/5b31176e-1ac2-453f-8750-e2524da5cb9b/volumes" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.295251 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rp8wd"] Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.297471 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.301986 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.311039 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rp8wd"] Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.440894 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cz6m\" (UniqueName: \"kubernetes.io/projected/a8ba24ca-c946-4684-817a-0ae5bada3ecd-kube-api-access-6cz6m\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.440960 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ba24ca-c946-4684-817a-0ae5bada3ecd-catalog-content\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.441245 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ba24ca-c946-4684-817a-0ae5bada3ecd-utilities\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.541743 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ba24ca-c946-4684-817a-0ae5bada3ecd-catalog-content\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.541816 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ba24ca-c946-4684-817a-0ae5bada3ecd-utilities\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.542249 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8ba24ca-c946-4684-817a-0ae5bada3ecd-catalog-content\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.542603 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cz6m\" (UniqueName: \"kubernetes.io/projected/a8ba24ca-c946-4684-817a-0ae5bada3ecd-kube-api-access-6cz6m\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.542850 4913 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8ba24ca-c946-4684-817a-0ae5bada3ecd-utilities\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.574813 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cz6m\" (UniqueName: \"kubernetes.io/projected/a8ba24ca-c946-4684-817a-0ae5bada3ecd-kube-api-access-6cz6m\") pod \"certified-operators-rp8wd\" (UID: \"a8ba24ca-c946-4684-817a-0ae5bada3ecd\") " pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.613102 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.889128 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tmk45"] Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.892362 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.902027 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.908391 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmk45"] Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.949572 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81adc58-27d6-4087-9902-6e61aba9bfaa-catalog-content\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.949741 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81adc58-27d6-4087-9902-6e61aba9bfaa-utilities\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:44 crc kubenswrapper[4913]: I0121 06:40:44.949798 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7bsl\" (UniqueName: \"kubernetes.io/projected/e81adc58-27d6-4087-9902-6e61aba9bfaa-kube-api-access-r7bsl\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.050819 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81adc58-27d6-4087-9902-6e61aba9bfaa-utilities\") pod \"community-operators-tmk45\" (UID: 
\"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.050879 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7bsl\" (UniqueName: \"kubernetes.io/projected/e81adc58-27d6-4087-9902-6e61aba9bfaa-kube-api-access-r7bsl\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.050984 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81adc58-27d6-4087-9902-6e61aba9bfaa-catalog-content\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.052040 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e81adc58-27d6-4087-9902-6e61aba9bfaa-utilities\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.052052 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e81adc58-27d6-4087-9902-6e61aba9bfaa-catalog-content\") pod \"community-operators-tmk45\" (UID: \"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.074401 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7bsl\" (UniqueName: \"kubernetes.io/projected/e81adc58-27d6-4087-9902-6e61aba9bfaa-kube-api-access-r7bsl\") pod \"community-operators-tmk45\" (UID: 
\"e81adc58-27d6-4087-9902-6e61aba9bfaa\") " pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.086382 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rp8wd"] Jan 21 06:40:45 crc kubenswrapper[4913]: W0121 06:40:45.097135 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8ba24ca_c946_4684_817a_0ae5bada3ecd.slice/crio-da2c5653d71b153aae062c256c9ce6be89bce9d97d10846b16e54f7989ba5d8f WatchSource:0}: Error finding container da2c5653d71b153aae062c256c9ce6be89bce9d97d10846b16e54f7989ba5d8f: Status 404 returned error can't find the container with id da2c5653d71b153aae062c256c9ce6be89bce9d97d10846b16e54f7989ba5d8f Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.263355 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.695832 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tmk45"] Jan 21 06:40:45 crc kubenswrapper[4913]: W0121 06:40:45.697963 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode81adc58_27d6_4087_9902_6e61aba9bfaa.slice/crio-0b9fc3c6a2dbdf48cc98d44c5567138f169d3df4ec3e390a1e53046b5c79663b WatchSource:0}: Error finding container 0b9fc3c6a2dbdf48cc98d44c5567138f169d3df4ec3e390a1e53046b5c79663b: Status 404 returned error can't find the container with id 0b9fc3c6a2dbdf48cc98d44c5567138f169d3df4ec3e390a1e53046b5c79663b Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.818397 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmk45" 
event={"ID":"e81adc58-27d6-4087-9902-6e61aba9bfaa","Type":"ContainerStarted","Data":"0b9fc3c6a2dbdf48cc98d44c5567138f169d3df4ec3e390a1e53046b5c79663b"} Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.822954 4913 generic.go:334] "Generic (PLEG): container finished" podID="a8ba24ca-c946-4684-817a-0ae5bada3ecd" containerID="45e52ee2e29f70b7bb59c5aa4d34f2597d2529e6e5b2e2f0d9e05a4a1f9611ce" exitCode=0 Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.823021 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp8wd" event={"ID":"a8ba24ca-c946-4684-817a-0ae5bada3ecd","Type":"ContainerDied","Data":"45e52ee2e29f70b7bb59c5aa4d34f2597d2529e6e5b2e2f0d9e05a4a1f9611ce"} Jan 21 06:40:45 crc kubenswrapper[4913]: I0121 06:40:45.823066 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp8wd" event={"ID":"a8ba24ca-c946-4684-817a-0ae5bada3ecd","Type":"ContainerStarted","Data":"da2c5653d71b153aae062c256c9ce6be89bce9d97d10846b16e54f7989ba5d8f"} Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.688050 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-frpd4"] Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.691025 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.693495 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.708988 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-frpd4"] Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.834113 4913 generic.go:334] "Generic (PLEG): container finished" podID="e81adc58-27d6-4087-9902-6e61aba9bfaa" containerID="a52fe07e241638d9c176b96b1b1c8da237c3f0bbcfb011fc95b9f09991cda53f" exitCode=0 Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.834181 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmk45" event={"ID":"e81adc58-27d6-4087-9902-6e61aba9bfaa","Type":"ContainerDied","Data":"a52fe07e241638d9c176b96b1b1c8da237c3f0bbcfb011fc95b9f09991cda53f"} Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.841542 4913 generic.go:334] "Generic (PLEG): container finished" podID="a8ba24ca-c946-4684-817a-0ae5bada3ecd" containerID="4f2d57dc6e7c27c4341c60afc83885041a3478ddc89612b905dbfc2303bb30c1" exitCode=0 Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.841725 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp8wd" event={"ID":"a8ba24ca-c946-4684-817a-0ae5bada3ecd","Type":"ContainerDied","Data":"4f2d57dc6e7c27c4341c60afc83885041a3478ddc89612b905dbfc2303bb30c1"} Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.876551 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d83360-7a65-47b3-98df-42902962da8d-utilities\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" 
Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.876677 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxps\" (UniqueName: \"kubernetes.io/projected/b6d83360-7a65-47b3-98df-42902962da8d-kube-api-access-4wxps\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.876718 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d83360-7a65-47b3-98df-42902962da8d-catalog-content\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.977777 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d83360-7a65-47b3-98df-42902962da8d-utilities\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.977933 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wxps\" (UniqueName: \"kubernetes.io/projected/b6d83360-7a65-47b3-98df-42902962da8d-kube-api-access-4wxps\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.978447 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d83360-7a65-47b3-98df-42902962da8d-catalog-content\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " 
pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.979192 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d83360-7a65-47b3-98df-42902962da8d-catalog-content\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:46 crc kubenswrapper[4913]: I0121 06:40:46.979727 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d83360-7a65-47b3-98df-42902962da8d-utilities\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.013422 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wxps\" (UniqueName: \"kubernetes.io/projected/b6d83360-7a65-47b3-98df-42902962da8d-kube-api-access-4wxps\") pod \"redhat-marketplace-frpd4\" (UID: \"b6d83360-7a65-47b3-98df-42902962da8d\") " pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.287442 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pp6lf"] Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.290406 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.294689 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.307525 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.312185 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pp6lf"] Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.486510 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvb79\" (UniqueName: \"kubernetes.io/projected/4fd9a0ea-0344-4e90-87f0-34a568804f80-kube-api-access-dvb79\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.487038 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd9a0ea-0344-4e90-87f0-34a568804f80-catalog-content\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.487152 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd9a0ea-0344-4e90-87f0-34a568804f80-utilities\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.589730 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvb79\" (UniqueName: \"kubernetes.io/projected/4fd9a0ea-0344-4e90-87f0-34a568804f80-kube-api-access-dvb79\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.589843 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd9a0ea-0344-4e90-87f0-34a568804f80-catalog-content\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.589925 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd9a0ea-0344-4e90-87f0-34a568804f80-utilities\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.591691 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fd9a0ea-0344-4e90-87f0-34a568804f80-utilities\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.591881 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fd9a0ea-0344-4e90-87f0-34a568804f80-catalog-content\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.621114 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvb79\" (UniqueName: \"kubernetes.io/projected/4fd9a0ea-0344-4e90-87f0-34a568804f80-kube-api-access-dvb79\") pod \"redhat-operators-pp6lf\" (UID: \"4fd9a0ea-0344-4e90-87f0-34a568804f80\") " pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.632000 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.848237 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmk45" event={"ID":"e81adc58-27d6-4087-9902-6e61aba9bfaa","Type":"ContainerStarted","Data":"9945b48a1782babeba1f207628c5a2bceb7366fc4e50cda0064dba34b9b61e3e"} Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.850462 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp8wd" event={"ID":"a8ba24ca-c946-4684-817a-0ae5bada3ecd","Type":"ContainerStarted","Data":"8e0561a75daf86567b4a196af0f68d6f75bf5f9b1c147cb955c7facc99271e17"} Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.889449 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-frpd4"] Jan 21 06:40:47 crc kubenswrapper[4913]: I0121 06:40:47.904252 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rp8wd" podStartSLOduration=2.469681473 podStartE2EDuration="3.904236835s" podCreationTimestamp="2026-01-21 06:40:44 +0000 UTC" firstStartedPulling="2026-01-21 06:40:45.825150152 +0000 UTC m=+335.621509855" lastFinishedPulling="2026-01-21 06:40:47.259705504 +0000 UTC m=+337.056065217" observedRunningTime="2026-01-21 06:40:47.903724221 +0000 UTC m=+337.700083914" watchObservedRunningTime="2026-01-21 06:40:47.904236835 +0000 UTC m=+337.700596508" Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.060165 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pp6lf"] Jan 21 06:40:48 crc kubenswrapper[4913]: W0121 06:40:48.096461 4913 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fd9a0ea_0344_4e90_87f0_34a568804f80.slice/crio-f32a155c6268e30fedf5fc68e1e52a94f09ca747f5f1003efcaffd901cacfb25 WatchSource:0}: Error finding container f32a155c6268e30fedf5fc68e1e52a94f09ca747f5f1003efcaffd901cacfb25: Status 404 returned error can't find the container with id f32a155c6268e30fedf5fc68e1e52a94f09ca747f5f1003efcaffd901cacfb25 Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.857066 4913 generic.go:334] "Generic (PLEG): container finished" podID="4fd9a0ea-0344-4e90-87f0-34a568804f80" containerID="c1c9628ec256da7c4bd3c2b80145eaa4ffc63343ceeb07fffbacebe0261eaae8" exitCode=0 Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.857165 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp6lf" event={"ID":"4fd9a0ea-0344-4e90-87f0-34a568804f80","Type":"ContainerDied","Data":"c1c9628ec256da7c4bd3c2b80145eaa4ffc63343ceeb07fffbacebe0261eaae8"} Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.857395 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp6lf" event={"ID":"4fd9a0ea-0344-4e90-87f0-34a568804f80","Type":"ContainerStarted","Data":"f32a155c6268e30fedf5fc68e1e52a94f09ca747f5f1003efcaffd901cacfb25"} Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.859327 4913 generic.go:334] "Generic (PLEG): container finished" podID="b6d83360-7a65-47b3-98df-42902962da8d" containerID="667334d76a9d7bd122e195a230ff4f34811cd491a6beca89ebe519edbbf4892e" exitCode=0 Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.859358 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-frpd4" event={"ID":"b6d83360-7a65-47b3-98df-42902962da8d","Type":"ContainerDied","Data":"667334d76a9d7bd122e195a230ff4f34811cd491a6beca89ebe519edbbf4892e"} Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.859390 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-frpd4" event={"ID":"b6d83360-7a65-47b3-98df-42902962da8d","Type":"ContainerStarted","Data":"d1b20acf8549a3b7da61ace37541771a495849a99a5c19d6aa33d057d8407554"} Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.863524 4913 generic.go:334] "Generic (PLEG): container finished" podID="e81adc58-27d6-4087-9902-6e61aba9bfaa" containerID="9945b48a1782babeba1f207628c5a2bceb7366fc4e50cda0064dba34b9b61e3e" exitCode=0 Jan 21 06:40:48 crc kubenswrapper[4913]: I0121 06:40:48.863580 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmk45" event={"ID":"e81adc58-27d6-4087-9902-6e61aba9bfaa","Type":"ContainerDied","Data":"9945b48a1782babeba1f207628c5a2bceb7366fc4e50cda0064dba34b9b61e3e"} Jan 21 06:40:49 crc kubenswrapper[4913]: I0121 06:40:49.871384 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-frpd4" event={"ID":"b6d83360-7a65-47b3-98df-42902962da8d","Type":"ContainerStarted","Data":"020d57f9dcefc7a806ce9c2910a2cbf1097007d57f04c2232488c6d3125f10b9"} Jan 21 06:40:49 crc kubenswrapper[4913]: I0121 06:40:49.874295 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tmk45" event={"ID":"e81adc58-27d6-4087-9902-6e61aba9bfaa","Type":"ContainerStarted","Data":"fcacc05ede6e8fafd05a06a94622a0def2b5d20848fe6d676a116c77b796bd4c"} Jan 21 06:40:49 crc kubenswrapper[4913]: I0121 06:40:49.876187 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp6lf" event={"ID":"4fd9a0ea-0344-4e90-87f0-34a568804f80","Type":"ContainerStarted","Data":"ac586c97f3e4e17cb3c0a20832d1b157caadc90b3ccbb578d1e71b7770546a8a"} Jan 21 06:40:49 crc kubenswrapper[4913]: I0121 06:40:49.920131 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tmk45" podStartSLOduration=3.410830549 
podStartE2EDuration="5.920114283s" podCreationTimestamp="2026-01-21 06:40:44 +0000 UTC" firstStartedPulling="2026-01-21 06:40:46.83665235 +0000 UTC m=+336.633012023" lastFinishedPulling="2026-01-21 06:40:49.345936064 +0000 UTC m=+339.142295757" observedRunningTime="2026-01-21 06:40:49.917335116 +0000 UTC m=+339.713694799" watchObservedRunningTime="2026-01-21 06:40:49.920114283 +0000 UTC m=+339.716473956" Jan 21 06:40:50 crc kubenswrapper[4913]: I0121 06:40:50.886715 4913 generic.go:334] "Generic (PLEG): container finished" podID="4fd9a0ea-0344-4e90-87f0-34a568804f80" containerID="ac586c97f3e4e17cb3c0a20832d1b157caadc90b3ccbb578d1e71b7770546a8a" exitCode=0 Jan 21 06:40:50 crc kubenswrapper[4913]: I0121 06:40:50.886845 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp6lf" event={"ID":"4fd9a0ea-0344-4e90-87f0-34a568804f80","Type":"ContainerDied","Data":"ac586c97f3e4e17cb3c0a20832d1b157caadc90b3ccbb578d1e71b7770546a8a"} Jan 21 06:40:50 crc kubenswrapper[4913]: I0121 06:40:50.890786 4913 generic.go:334] "Generic (PLEG): container finished" podID="b6d83360-7a65-47b3-98df-42902962da8d" containerID="020d57f9dcefc7a806ce9c2910a2cbf1097007d57f04c2232488c6d3125f10b9" exitCode=0 Jan 21 06:40:50 crc kubenswrapper[4913]: I0121 06:40:50.890933 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-frpd4" event={"ID":"b6d83360-7a65-47b3-98df-42902962da8d","Type":"ContainerDied","Data":"020d57f9dcefc7a806ce9c2910a2cbf1097007d57f04c2232488c6d3125f10b9"} Jan 21 06:40:51 crc kubenswrapper[4913]: I0121 06:40:51.903137 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pp6lf" event={"ID":"4fd9a0ea-0344-4e90-87f0-34a568804f80","Type":"ContainerStarted","Data":"bc9f718296dd53b25974786ace8bf59b36fd595a623e7bbabfbb20de0b41f833"} Jan 21 06:40:51 crc kubenswrapper[4913]: I0121 06:40:51.932012 4913 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-pp6lf" podStartSLOduration=2.483261757 podStartE2EDuration="4.9319895s" podCreationTimestamp="2026-01-21 06:40:47 +0000 UTC" firstStartedPulling="2026-01-21 06:40:48.859377048 +0000 UTC m=+338.655736771" lastFinishedPulling="2026-01-21 06:40:51.308104801 +0000 UTC m=+341.104464514" observedRunningTime="2026-01-21 06:40:51.931243648 +0000 UTC m=+341.727603351" watchObservedRunningTime="2026-01-21 06:40:51.9319895 +0000 UTC m=+341.728349213" Jan 21 06:40:54 crc kubenswrapper[4913]: I0121 06:40:54.613453 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:54 crc kubenswrapper[4913]: I0121 06:40:54.613733 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:54 crc kubenswrapper[4913]: I0121 06:40:54.673252 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:54 crc kubenswrapper[4913]: I0121 06:40:54.922536 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-frpd4" event={"ID":"b6d83360-7a65-47b3-98df-42902962da8d","Type":"ContainerStarted","Data":"05e87eff78c4ffb20e0b35bfc633025ab1fc926ebaf65ee4c40e216e1b85ce5e"} Jan 21 06:40:54 crc kubenswrapper[4913]: I0121 06:40:54.942092 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-frpd4" podStartSLOduration=3.781829863 podStartE2EDuration="8.942075734s" podCreationTimestamp="2026-01-21 06:40:46 +0000 UTC" firstStartedPulling="2026-01-21 06:40:48.860422637 +0000 UTC m=+338.656782310" lastFinishedPulling="2026-01-21 06:40:54.020668508 +0000 UTC m=+343.817028181" observedRunningTime="2026-01-21 06:40:54.942020833 +0000 UTC m=+344.738380506" watchObservedRunningTime="2026-01-21 
06:40:54.942075734 +0000 UTC m=+344.738435407" Jan 21 06:40:54 crc kubenswrapper[4913]: I0121 06:40:54.974261 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rp8wd" Jan 21 06:40:55 crc kubenswrapper[4913]: I0121 06:40:55.264662 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:55 crc kubenswrapper[4913]: I0121 06:40:55.264996 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:55 crc kubenswrapper[4913]: I0121 06:40:55.322787 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:55 crc kubenswrapper[4913]: I0121 06:40:55.986137 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tmk45" Jan 21 06:40:57 crc kubenswrapper[4913]: I0121 06:40:57.307949 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:57 crc kubenswrapper[4913]: I0121 06:40:57.308672 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:57 crc kubenswrapper[4913]: I0121 06:40:57.352752 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:40:57 crc kubenswrapper[4913]: I0121 06:40:57.632298 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:57 crc kubenswrapper[4913]: I0121 06:40:57.632366 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:40:58 crc kubenswrapper[4913]: I0121 
06:40:58.674523 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pp6lf" podUID="4fd9a0ea-0344-4e90-87f0-34a568804f80" containerName="registry-server" probeResult="failure" output=< Jan 21 06:40:58 crc kubenswrapper[4913]: timeout: failed to connect service ":50051" within 1s Jan 21 06:40:58 crc kubenswrapper[4913]: > Jan 21 06:41:07 crc kubenswrapper[4913]: I0121 06:41:07.371437 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-frpd4" Jan 21 06:41:07 crc kubenswrapper[4913]: I0121 06:41:07.707551 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:41:07 crc kubenswrapper[4913]: I0121 06:41:07.781085 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pp6lf" Jan 21 06:41:08 crc kubenswrapper[4913]: I0121 06:41:08.319382 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:41:08 crc kubenswrapper[4913]: I0121 06:41:08.319781 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:41:19 crc kubenswrapper[4913]: I0121 06:41:19.390809 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bd9d8574d-grvnz"] Jan 21 06:41:19 crc kubenswrapper[4913]: I0121 06:41:19.392562 4913 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" podUID="799094d2-a718-4044-b16d-8a011cc3ecaa" containerName="controller-manager" containerID="cri-o://f35fa3200dbe9c898b5e2c2080a410b5da87743837e3a9c9c8d1582776ad2240" gracePeriod=30 Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.106536 4913 generic.go:334] "Generic (PLEG): container finished" podID="799094d2-a718-4044-b16d-8a011cc3ecaa" containerID="f35fa3200dbe9c898b5e2c2080a410b5da87743837e3a9c9c8d1582776ad2240" exitCode=0 Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.106648 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" event={"ID":"799094d2-a718-4044-b16d-8a011cc3ecaa","Type":"ContainerDied","Data":"f35fa3200dbe9c898b5e2c2080a410b5da87743837e3a9c9c8d1582776ad2240"} Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.341627 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.512189 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-client-ca\") pod \"799094d2-a718-4044-b16d-8a011cc3ecaa\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.512251 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn8zn\" (UniqueName: \"kubernetes.io/projected/799094d2-a718-4044-b16d-8a011cc3ecaa-kube-api-access-bn8zn\") pod \"799094d2-a718-4044-b16d-8a011cc3ecaa\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.512278 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-config\") pod \"799094d2-a718-4044-b16d-8a011cc3ecaa\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.512334 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799094d2-a718-4044-b16d-8a011cc3ecaa-serving-cert\") pod \"799094d2-a718-4044-b16d-8a011cc3ecaa\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.512368 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-proxy-ca-bundles\") pod \"799094d2-a718-4044-b16d-8a011cc3ecaa\" (UID: \"799094d2-a718-4044-b16d-8a011cc3ecaa\") " Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.512985 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "799094d2-a718-4044-b16d-8a011cc3ecaa" (UID: "799094d2-a718-4044-b16d-8a011cc3ecaa"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.513000 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-client-ca" (OuterVolumeSpecName: "client-ca") pod "799094d2-a718-4044-b16d-8a011cc3ecaa" (UID: "799094d2-a718-4044-b16d-8a011cc3ecaa"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.513413 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-config" (OuterVolumeSpecName: "config") pod "799094d2-a718-4044-b16d-8a011cc3ecaa" (UID: "799094d2-a718-4044-b16d-8a011cc3ecaa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.519000 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/799094d2-a718-4044-b16d-8a011cc3ecaa-kube-api-access-bn8zn" (OuterVolumeSpecName: "kube-api-access-bn8zn") pod "799094d2-a718-4044-b16d-8a011cc3ecaa" (UID: "799094d2-a718-4044-b16d-8a011cc3ecaa"). InnerVolumeSpecName "kube-api-access-bn8zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.519847 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/799094d2-a718-4044-b16d-8a011cc3ecaa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "799094d2-a718-4044-b16d-8a011cc3ecaa" (UID: "799094d2-a718-4044-b16d-8a011cc3ecaa"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.613658 4913 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.613698 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn8zn\" (UniqueName: \"kubernetes.io/projected/799094d2-a718-4044-b16d-8a011cc3ecaa-kube-api-access-bn8zn\") on node \"crc\" DevicePath \"\"" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.613721 4913 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.613735 4913 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799094d2-a718-4044-b16d-8a011cc3ecaa-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.613750 4913 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799094d2-a718-4044-b16d-8a011cc3ecaa-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.614962 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"] Jan 21 06:41:20 crc kubenswrapper[4913]: E0121 06:41:20.615317 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799094d2-a718-4044-b16d-8a011cc3ecaa" containerName="controller-manager" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.615344 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="799094d2-a718-4044-b16d-8a011cc3ecaa" containerName="controller-manager" Jan 21 06:41:20 crc 
kubenswrapper[4913]: I0121 06:41:20.615510 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="799094d2-a718-4044-b16d-8a011cc3ecaa" containerName="controller-manager" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.616099 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.619703 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"] Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.714630 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cccfc082-b6a7-4769-aea5-9fe750c1f724-serving-cert\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.714872 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxghl\" (UniqueName: \"kubernetes.io/projected/cccfc082-b6a7-4769-aea5-9fe750c1f724-kube-api-access-kxghl\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.714897 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-proxy-ca-bundles\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.715018 4913 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-config\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.715069 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-client-ca\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.816618 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cccfc082-b6a7-4769-aea5-9fe750c1f724-serving-cert\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.816664 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxghl\" (UniqueName: \"kubernetes.io/projected/cccfc082-b6a7-4769-aea5-9fe750c1f724-kube-api-access-kxghl\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.816683 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-proxy-ca-bundles\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " 
pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.816711 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-config\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.816741 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-client-ca\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.817472 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-client-ca\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.818820 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-proxy-ca-bundles\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.820117 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cccfc082-b6a7-4769-aea5-9fe750c1f724-config\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: 
\"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.824966 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cccfc082-b6a7-4769-aea5-9fe750c1f724-serving-cert\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.837341 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxghl\" (UniqueName: \"kubernetes.io/projected/cccfc082-b6a7-4769-aea5-9fe750c1f724-kube-api-access-kxghl\") pod \"controller-manager-57f77c6dd6-dd8k8\" (UID: \"cccfc082-b6a7-4769-aea5-9fe750c1f724\") " pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:20 crc kubenswrapper[4913]: I0121 06:41:20.943455 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:21 crc kubenswrapper[4913]: I0121 06:41:21.114044 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" event={"ID":"799094d2-a718-4044-b16d-8a011cc3ecaa","Type":"ContainerDied","Data":"31556e60258212c370c8931267e50827df2965a866ba8beef15717c83d5b56e5"} Jan 21 06:41:21 crc kubenswrapper[4913]: I0121 06:41:21.114114 4913 scope.go:117] "RemoveContainer" containerID="f35fa3200dbe9c898b5e2c2080a410b5da87743837e3a9c9c8d1582776ad2240" Jan 21 06:41:21 crc kubenswrapper[4913]: I0121 06:41:21.114117 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bd9d8574d-grvnz" Jan 21 06:41:21 crc kubenswrapper[4913]: I0121 06:41:21.128257 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8"] Jan 21 06:41:21 crc kubenswrapper[4913]: I0121 06:41:21.145408 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bd9d8574d-grvnz"] Jan 21 06:41:21 crc kubenswrapper[4913]: I0121 06:41:21.157487 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bd9d8574d-grvnz"] Jan 21 06:41:22 crc kubenswrapper[4913]: I0121 06:41:22.129585 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" event={"ID":"cccfc082-b6a7-4769-aea5-9fe750c1f724","Type":"ContainerStarted","Data":"dfe2887aa4939906fe8164648933b1dbe6d39d785e088d17d90af9eb91805d11"} Jan 21 06:41:22 crc kubenswrapper[4913]: I0121 06:41:22.130802 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:22 crc kubenswrapper[4913]: I0121 06:41:22.130867 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" event={"ID":"cccfc082-b6a7-4769-aea5-9fe750c1f724","Type":"ContainerStarted","Data":"49ea9acf3d77cd00bcf44e38b92df7e751dba25ec69a636302dce0b2f1a5f3e1"} Jan 21 06:41:22 crc kubenswrapper[4913]: I0121 06:41:22.137356 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" Jan 21 06:41:22 crc kubenswrapper[4913]: I0121 06:41:22.152821 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57f77c6dd6-dd8k8" podStartSLOduration=3.152791052 
podStartE2EDuration="3.152791052s" podCreationTimestamp="2026-01-21 06:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:41:22.149997514 +0000 UTC m=+371.946357217" watchObservedRunningTime="2026-01-21 06:41:22.152791052 +0000 UTC m=+371.949150725" Jan 21 06:41:22 crc kubenswrapper[4913]: I0121 06:41:22.553806 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="799094d2-a718-4044-b16d-8a011cc3ecaa" path="/var/lib/kubelet/pods/799094d2-a718-4044-b16d-8a011cc3ecaa/volumes" Jan 21 06:41:28 crc kubenswrapper[4913]: I0121 06:41:28.894556 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-npfqd"] Jan 21 06:41:28 crc kubenswrapper[4913]: I0121 06:41:28.895941 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:28 crc kubenswrapper[4913]: I0121 06:41:28.911801 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-npfqd"] Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.037910 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73eb761e-bfd8-435e-b2d4-e269953c3140-ca-trust-extracted\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.037959 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73eb761e-bfd8-435e-b2d4-e269953c3140-trusted-ca\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.037977 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73eb761e-bfd8-435e-b2d4-e269953c3140-registry-certificates\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.038001 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-bound-sa-token\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.038071 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73eb761e-bfd8-435e-b2d4-e269953c3140-installation-pull-secrets\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.038131 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8htm\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-kube-api-access-l8htm\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.038170 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.038230 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-registry-tls\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.071054 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.139729 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-registry-tls\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.140118 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73eb761e-bfd8-435e-b2d4-e269953c3140-ca-trust-extracted\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.140149 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73eb761e-bfd8-435e-b2d4-e269953c3140-trusted-ca\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.140176 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73eb761e-bfd8-435e-b2d4-e269953c3140-registry-certificates\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.140226 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-bound-sa-token\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.140261 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73eb761e-bfd8-435e-b2d4-e269953c3140-installation-pull-secrets\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.140286 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8htm\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-kube-api-access-l8htm\") 
pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.142980 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/73eb761e-bfd8-435e-b2d4-e269953c3140-ca-trust-extracted\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.143113 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73eb761e-bfd8-435e-b2d4-e269953c3140-trusted-ca\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.146424 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/73eb761e-bfd8-435e-b2d4-e269953c3140-registry-certificates\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.154555 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/73eb761e-bfd8-435e-b2d4-e269953c3140-installation-pull-secrets\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.156848 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-registry-tls\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.158935 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-bound-sa-token\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.161571 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8htm\" (UniqueName: \"kubernetes.io/projected/73eb761e-bfd8-435e-b2d4-e269953c3140-kube-api-access-l8htm\") pod \"image-registry-66df7c8f76-npfqd\" (UID: \"73eb761e-bfd8-435e-b2d4-e269953c3140\") " pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.552901 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:29 crc kubenswrapper[4913]: I0121 06:41:29.978346 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-npfqd"] Jan 21 06:41:30 crc kubenswrapper[4913]: I0121 06:41:30.182444 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" event={"ID":"73eb761e-bfd8-435e-b2d4-e269953c3140","Type":"ContainerStarted","Data":"660733956d52888c6da1b62bdf40d451311e4fe122454da02e4123731cd0a198"} Jan 21 06:41:31 crc kubenswrapper[4913]: I0121 06:41:31.189954 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" event={"ID":"73eb761e-bfd8-435e-b2d4-e269953c3140","Type":"ContainerStarted","Data":"e094d0e5cf925a7295919c4da401cf1aaf5c48fa7374e1c76275b063a6c4c536"} Jan 21 06:41:31 crc kubenswrapper[4913]: I0121 06:41:31.190691 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:31 crc kubenswrapper[4913]: I0121 06:41:31.212875 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" podStartSLOduration=3.212854165 podStartE2EDuration="3.212854165s" podCreationTimestamp="2026-01-21 06:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:41:31.211156298 +0000 UTC m=+381.007515981" watchObservedRunningTime="2026-01-21 06:41:31.212854165 +0000 UTC m=+381.009213858" Jan 21 06:41:38 crc kubenswrapper[4913]: I0121 06:41:38.319343 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:41:38 crc kubenswrapper[4913]: I0121 06:41:38.320119 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:41:38 crc kubenswrapper[4913]: I0121 06:41:38.320193 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:41:38 crc kubenswrapper[4913]: I0121 06:41:38.321048 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9a0d7f92e1ba661738bb80d2aff2afeda7674c7a8aec1c1649a1b8affcc4dd3"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 06:41:38 crc kubenswrapper[4913]: I0121 06:41:38.321135 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://a9a0d7f92e1ba661738bb80d2aff2afeda7674c7a8aec1c1649a1b8affcc4dd3" gracePeriod=600 Jan 21 06:41:39 crc kubenswrapper[4913]: I0121 06:41:39.250254 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="a9a0d7f92e1ba661738bb80d2aff2afeda7674c7a8aec1c1649a1b8affcc4dd3" exitCode=0 Jan 21 06:41:39 crc kubenswrapper[4913]: I0121 06:41:39.250365 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" 
event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"a9a0d7f92e1ba661738bb80d2aff2afeda7674c7a8aec1c1649a1b8affcc4dd3"} Jan 21 06:41:39 crc kubenswrapper[4913]: I0121 06:41:39.251034 4913 scope.go:117] "RemoveContainer" containerID="d6751a4e236aa6776045ff9130eee2c33d8339d723cae8dcf68970eb582fd355" Jan 21 06:41:40 crc kubenswrapper[4913]: I0121 06:41:40.258751 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"0a98163e6aada7ee4b9fa7fd801afc6659904461e6fd2babc62a5d38c872a832"} Jan 21 06:41:49 crc kubenswrapper[4913]: I0121 06:41:49.558511 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-npfqd" Jan 21 06:41:49 crc kubenswrapper[4913]: I0121 06:41:49.613538 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-78wqc"] Jan 21 06:42:14 crc kubenswrapper[4913]: I0121 06:42:14.654883 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" podUID="f46fd64f-46cb-4464-8f26-6df55bf77ba1" containerName="registry" containerID="cri-o://7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4" gracePeriod=30 Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.079848 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171434 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f46fd64f-46cb-4464-8f26-6df55bf77ba1-ca-trust-extracted\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171505 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-certificates\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171534 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwm6w\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-kube-api-access-kwm6w\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171626 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f46fd64f-46cb-4464-8f26-6df55bf77ba1-installation-pull-secrets\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171656 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-bound-sa-token\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171710 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-trusted-ca\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171889 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.171927 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-tls\") pod \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\" (UID: \"f46fd64f-46cb-4464-8f26-6df55bf77ba1\") " Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.172688 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.173416 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.173809 4913 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.173878 4913 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.178260 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46fd64f-46cb-4464-8f26-6df55bf77ba1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.178967 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.179635 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-kube-api-access-kwm6w" (OuterVolumeSpecName: "kube-api-access-kwm6w") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "kube-api-access-kwm6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.179658 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.185682 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.197093 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f46fd64f-46cb-4464-8f26-6df55bf77ba1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f46fd64f-46cb-4464-8f26-6df55bf77ba1" (UID: "f46fd64f-46cb-4464-8f26-6df55bf77ba1"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.275288 4913 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.275951 4913 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f46fd64f-46cb-4464-8f26-6df55bf77ba1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.276003 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwm6w\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-kube-api-access-kwm6w\") on node \"crc\" DevicePath \"\"" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.276033 4913 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f46fd64f-46cb-4464-8f26-6df55bf77ba1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.276049 4913 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f46fd64f-46cb-4464-8f26-6df55bf77ba1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.496043 4913 generic.go:334] "Generic (PLEG): container finished" podID="f46fd64f-46cb-4464-8f26-6df55bf77ba1" containerID="7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4" exitCode=0 Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.496126 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" 
event={"ID":"f46fd64f-46cb-4464-8f26-6df55bf77ba1","Type":"ContainerDied","Data":"7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4"} Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.496497 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" event={"ID":"f46fd64f-46cb-4464-8f26-6df55bf77ba1","Type":"ContainerDied","Data":"608586ddc5451cc666f70963a96a052acebcbef086316fbb9184345cbc03f7b5"} Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.496533 4913 scope.go:117] "RemoveContainer" containerID="7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.496148 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-78wqc" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.520316 4913 scope.go:117] "RemoveContainer" containerID="7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4" Jan 21 06:42:15 crc kubenswrapper[4913]: E0121 06:42:15.520883 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4\": container with ID starting with 7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4 not found: ID does not exist" containerID="7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.520921 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4"} err="failed to get container status \"7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4\": rpc error: code = NotFound desc = could not find container \"7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4\": container with ID 
starting with 7d10a480e25ec5153471994faedc3bf12935ac852d05a4c31e8c7425ca886ef4 not found: ID does not exist" Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.545262 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-78wqc"] Jan 21 06:42:15 crc kubenswrapper[4913]: I0121 06:42:15.556208 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-78wqc"] Jan 21 06:42:16 crc kubenswrapper[4913]: I0121 06:42:16.534990 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f46fd64f-46cb-4464-8f26-6df55bf77ba1" path="/var/lib/kubelet/pods/f46fd64f-46cb-4464-8f26-6df55bf77ba1/volumes" Jan 21 06:44:08 crc kubenswrapper[4913]: I0121 06:44:08.319267 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:44:08 crc kubenswrapper[4913]: I0121 06:44:08.320026 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:44:38 crc kubenswrapper[4913]: I0121 06:44:38.319420 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:44:38 crc kubenswrapper[4913]: I0121 06:44:38.320216 4913 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.196579 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn"] Jan 21 06:45:00 crc kubenswrapper[4913]: E0121 06:45:00.197307 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46fd64f-46cb-4464-8f26-6df55bf77ba1" containerName="registry" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.197321 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46fd64f-46cb-4464-8f26-6df55bf77ba1" containerName="registry" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.197445 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46fd64f-46cb-4464-8f26-6df55bf77ba1" containerName="registry" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.197890 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.201342 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.201859 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.211716 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59238b39-5be3-4531-9d3d-7d3b89d2c394-secret-volume\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.212088 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c9wc\" (UniqueName: \"kubernetes.io/projected/59238b39-5be3-4531-9d3d-7d3b89d2c394-kube-api-access-9c9wc\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.212227 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59238b39-5be3-4531-9d3d-7d3b89d2c394-config-volume\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.226311 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn"] Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.312877 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59238b39-5be3-4531-9d3d-7d3b89d2c394-secret-volume\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.313059 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c9wc\" (UniqueName: \"kubernetes.io/projected/59238b39-5be3-4531-9d3d-7d3b89d2c394-kube-api-access-9c9wc\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.313105 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59238b39-5be3-4531-9d3d-7d3b89d2c394-config-volume\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.314743 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59238b39-5be3-4531-9d3d-7d3b89d2c394-config-volume\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.331134 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/59238b39-5be3-4531-9d3d-7d3b89d2c394-secret-volume\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn"
Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.343276 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c9wc\" (UniqueName: \"kubernetes.io/projected/59238b39-5be3-4531-9d3d-7d3b89d2c394-kube-api-access-9c9wc\") pod \"collect-profiles-29482965-9g4sn\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn"
Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.536260 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn"
Jan 21 06:45:00 crc kubenswrapper[4913]: I0121 06:45:00.807067 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn"]
Jan 21 06:45:01 crc kubenswrapper[4913]: I0121 06:45:01.638306 4913 generic.go:334] "Generic (PLEG): container finished" podID="59238b39-5be3-4531-9d3d-7d3b89d2c394" containerID="4f2dab9db916ff8d0519c5eaa81d06419025976d84bd653776e58f5e8a4c59bf" exitCode=0
Jan 21 06:45:01 crc kubenswrapper[4913]: I0121 06:45:01.638412 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" event={"ID":"59238b39-5be3-4531-9d3d-7d3b89d2c394","Type":"ContainerDied","Data":"4f2dab9db916ff8d0519c5eaa81d06419025976d84bd653776e58f5e8a4c59bf"}
Jan 21 06:45:01 crc kubenswrapper[4913]: I0121 06:45:01.638692 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" event={"ID":"59238b39-5be3-4531-9d3d-7d3b89d2c394","Type":"ContainerStarted","Data":"6b5a51c420191e3d068d4e126a9193361f47d211f512c80ee9eeaffdaea068af"}
Jan 21 06:45:02 crc kubenswrapper[4913]: I0121 06:45:02.946378 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn"
Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.145334 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c9wc\" (UniqueName: \"kubernetes.io/projected/59238b39-5be3-4531-9d3d-7d3b89d2c394-kube-api-access-9c9wc\") pod \"59238b39-5be3-4531-9d3d-7d3b89d2c394\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") "
Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.145435 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59238b39-5be3-4531-9d3d-7d3b89d2c394-secret-volume\") pod \"59238b39-5be3-4531-9d3d-7d3b89d2c394\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") "
Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.145515 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59238b39-5be3-4531-9d3d-7d3b89d2c394-config-volume\") pod \"59238b39-5be3-4531-9d3d-7d3b89d2c394\" (UID: \"59238b39-5be3-4531-9d3d-7d3b89d2c394\") "
Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.146915 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59238b39-5be3-4531-9d3d-7d3b89d2c394-config-volume" (OuterVolumeSpecName: "config-volume") pod "59238b39-5be3-4531-9d3d-7d3b89d2c394" (UID: "59238b39-5be3-4531-9d3d-7d3b89d2c394"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.152317 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59238b39-5be3-4531-9d3d-7d3b89d2c394-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "59238b39-5be3-4531-9d3d-7d3b89d2c394" (UID: "59238b39-5be3-4531-9d3d-7d3b89d2c394"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.152882 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59238b39-5be3-4531-9d3d-7d3b89d2c394-kube-api-access-9c9wc" (OuterVolumeSpecName: "kube-api-access-9c9wc") pod "59238b39-5be3-4531-9d3d-7d3b89d2c394" (UID: "59238b39-5be3-4531-9d3d-7d3b89d2c394"). InnerVolumeSpecName "kube-api-access-9c9wc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.247901 4913 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59238b39-5be3-4531-9d3d-7d3b89d2c394-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.247957 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c9wc\" (UniqueName: \"kubernetes.io/projected/59238b39-5be3-4531-9d3d-7d3b89d2c394-kube-api-access-9c9wc\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.247976 4913 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59238b39-5be3-4531-9d3d-7d3b89d2c394-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.656691 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn" event={"ID":"59238b39-5be3-4531-9d3d-7d3b89d2c394","Type":"ContainerDied","Data":"6b5a51c420191e3d068d4e126a9193361f47d211f512c80ee9eeaffdaea068af"}
Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.656767 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b5a51c420191e3d068d4e126a9193361f47d211f512c80ee9eeaffdaea068af"
Jan 21 06:45:03 crc kubenswrapper[4913]: I0121 06:45:03.656892 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482965-9g4sn"
Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.319532 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.319976 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.320037 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg"
Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.320911 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a98163e6aada7ee4b9fa7fd801afc6659904461e6fd2babc62a5d38c872a832"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.320995 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://0a98163e6aada7ee4b9fa7fd801afc6659904461e6fd2babc62a5d38c872a832" gracePeriod=600
Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.695513 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="0a98163e6aada7ee4b9fa7fd801afc6659904461e6fd2babc62a5d38c872a832" exitCode=0
Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.695575 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"0a98163e6aada7ee4b9fa7fd801afc6659904461e6fd2babc62a5d38c872a832"}
Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.695905 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"9d9dff08306ba7a1ea84489cddd781c030c4851b9fa6eef7b12bb8a6ced5e1f3"}
Jan 21 06:45:08 crc kubenswrapper[4913]: I0121 06:45:08.695934 4913 scope.go:117] "RemoveContainer" containerID="a9a0d7f92e1ba661738bb80d2aff2afeda7674c7a8aec1c1649a1b8affcc4dd3"
Jan 21 06:45:10 crc kubenswrapper[4913]: I0121 06:45:10.737083 4913 scope.go:117] "RemoveContainer" containerID="43add6da62b377541babb732cc3e9566b8a93ef426ada2deaad87cd9ee4e97bb"
Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.947854 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c7xtt"]
Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.952467 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-controller" containerID="cri-o://6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" gracePeriod=30
Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.952842 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="sbdb" containerID="cri-o://d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" gracePeriod=30
Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.952879 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="nbdb" containerID="cri-o://19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" gracePeriod=30
Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.952909 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="northd" containerID="cri-o://f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" gracePeriod=30
Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.953163 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" gracePeriod=30
Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.953193 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-node" containerID="cri-o://1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" gracePeriod=30
Jan 21 06:45:28 crc kubenswrapper[4913]: I0121 06:45:28.953225 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-acl-logging" containerID="cri-o://5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" gracePeriod=30
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.004266 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller" containerID="cri-o://e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" gracePeriod=30
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.077300 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce is running failed: container process not found" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.077820 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce is running failed: container process not found" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.078439 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce is running failed: container process not found" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.078477 4913 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="nbdb"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.079853 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.081467 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.082369 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.082403 4913 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="sbdb"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.225452 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/3.log"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.227983 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovn-acl-logging/0.log"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.228460 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovn-controller/0.log"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.229021 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278137 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v5kmt"]
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278318 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278329 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278339 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kubecfg-setup"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278344 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kubecfg-setup"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278353 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-acl-logging"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278360 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-acl-logging"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278369 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278376 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278389 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278396 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278402 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59238b39-5be3-4531-9d3d-7d3b89d2c394" containerName="collect-profiles"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278409 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="59238b39-5be3-4531-9d3d-7d3b89d2c394" containerName="collect-profiles"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278417 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="northd"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278423 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="northd"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278432 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278438 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278444 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278450 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278458 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="nbdb"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278463 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="nbdb"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278470 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="sbdb"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278475 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="sbdb"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278481 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-node"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278488 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-node"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278565 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278574 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="sbdb"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278583 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-node"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278640 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278648 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="northd"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278655 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="nbdb"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278662 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278670 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278676 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="59238b39-5be3-4531-9d3d-7d3b89d2c394" containerName="collect-profiles"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278685 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="kube-rbac-proxy-ovn-metrics"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278695 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278704 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovn-acl-logging"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278789 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278796 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278884 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.278962 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.278969 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" containerName="ovnkube-controller"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.281204 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.400701 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-var-lib-openvswitch\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.400827 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.400930 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-ovn-kubernetes\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401041 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afe1e161-7227-48ff-824e-01d26e5c7218-ovn-node-metrics-cert\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401083 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401117 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-var-lib-cni-networks-ovn-kubernetes\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401230 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401238 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-systemd-units\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401301 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-netd\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401343 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-kubelet\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401362 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-openvswitch\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401394 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-script-lib\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401413 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-systemd\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401433 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-bin\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401430 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401454 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-env-overrides\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401475 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-node-log\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401473 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401533 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401628 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-node-log" (OuterVolumeSpecName: "node-log") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401649 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401964 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401656 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.401908 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402053 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-log-socket\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402073 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-etc-openvswitch\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402114 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-log-socket" (OuterVolumeSpecName: "log-socket") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402140 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-netns\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402200 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402212 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402174 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-config\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402303 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8229\" (UniqueName: \"kubernetes.io/projected/afe1e161-7227-48ff-824e-01d26e5c7218-kube-api-access-j8229\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402331 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-ovn\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402372 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-slash\") pod \"afe1e161-7227-48ff-824e-01d26e5c7218\" (UID: \"afe1e161-7227-48ff-824e-01d26e5c7218\") "
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402468 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402561 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-slash" (OuterVolumeSpecName: "host-slash") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402575 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-cni-netd\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt"
Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402583 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "ovnkube-config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402658 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-ovnkube-script-lib\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402687 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c639508-4011-41af-8cb2-17be3ad6062c-ovn-node-metrics-cert\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402722 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-etc-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402742 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-env-overrides\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402768 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-kubelet\") pod \"ovnkube-node-v5kmt\" (UID: 
\"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402833 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-ovnkube-config\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402873 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-run-netns\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402909 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402941 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgbj4\" (UniqueName: \"kubernetes.io/projected/0c639508-4011-41af-8cb2-17be3ad6062c-kube-api-access-zgbj4\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.402962 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-systemd\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403008 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-node-log\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403036 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-systemd-units\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403068 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-ovn\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403099 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-run-ovn-kubernetes\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403167 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403283 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-slash\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403312 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-cni-bin\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403344 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-log-socket\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403368 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-var-lib-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403421 4913 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403438 4913 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403452 4913 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403466 4913 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403477 4913 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403488 4913 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403500 4913 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403511 4913 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403522 4913 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403534 4913 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403544 4913 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403558 4913 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403570 4913 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403582 4913 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/afe1e161-7227-48ff-824e-01d26e5c7218-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403612 4913 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: 
I0121 06:45:29.403625 4913 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.403637 4913 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.407095 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe1e161-7227-48ff-824e-01d26e5c7218-kube-api-access-j8229" (OuterVolumeSpecName: "kube-api-access-j8229") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "kube-api-access-j8229". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.407167 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe1e161-7227-48ff-824e-01d26e5c7218-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.414262 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "afe1e161-7227-48ff-824e-01d26e5c7218" (UID: "afe1e161-7227-48ff-824e-01d26e5c7218"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505492 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505549 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-systemd\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505574 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgbj4\" (UniqueName: \"kubernetes.io/projected/0c639508-4011-41af-8cb2-17be3ad6062c-kube-api-access-zgbj4\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505644 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-node-log\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505670 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-systemd-units\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505692 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-ovn\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505715 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-run-ovn-kubernetes\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505740 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505767 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-slash\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505786 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-cni-bin\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc 
kubenswrapper[4913]: I0121 06:45:29.505811 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-var-lib-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505829 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-log-socket\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505854 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-cni-netd\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505872 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-ovnkube-script-lib\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505891 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c639508-4011-41af-8cb2-17be3ad6062c-ovn-node-metrics-cert\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505916 4913 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-kubelet\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505935 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-etc-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505952 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-env-overrides\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.505988 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-ovnkube-config\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506010 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-run-netns\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506063 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8229\" 
(UniqueName: \"kubernetes.io/projected/afe1e161-7227-48ff-824e-01d26e5c7218-kube-api-access-j8229\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506074 4913 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/afe1e161-7227-48ff-824e-01d26e5c7218-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506086 4913 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/afe1e161-7227-48ff-824e-01d26e5c7218-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506132 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-run-netns\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506175 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506201 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-systemd\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506554 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-node-log\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506612 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-systemd-units\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506644 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-ovn\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506672 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-run-ovn-kubernetes\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506705 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-run-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506732 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-slash\") pod \"ovnkube-node-v5kmt\" (UID: 
\"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506757 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-cni-bin\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506783 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-var-lib-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506812 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-log-socket\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.506838 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-cni-netd\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.507576 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-ovnkube-script-lib\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc 
kubenswrapper[4913]: I0121 06:45:29.510813 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c639508-4011-41af-8cb2-17be3ad6062c-ovn-node-metrics-cert\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.510897 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-host-kubelet\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.510939 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c639508-4011-41af-8cb2-17be3ad6062c-etc-openvswitch\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.511406 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-env-overrides\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.512171 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c639508-4011-41af-8cb2-17be3ad6062c-ovnkube-config\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.523979 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-zgbj4\" (UniqueName: \"kubernetes.io/projected/0c639508-4011-41af-8cb2-17be3ad6062c-kube-api-access-zgbj4\") pod \"ovnkube-node-v5kmt\" (UID: \"0c639508-4011-41af-8cb2-17be3ad6062c\") " pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.600378 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.813908 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovnkube-controller/3.log" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.816338 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovn-acl-logging/0.log" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.816900 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-c7xtt_afe1e161-7227-48ff-824e-01d26e5c7218/ovn-controller/0.log" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817278 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" exitCode=0 Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817308 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" exitCode=0 Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817318 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" exitCode=0 Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817328 4913 generic.go:334] 
"Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" exitCode=0 Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817337 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" exitCode=0 Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817345 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" exitCode=0 Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817352 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" exitCode=143 Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817361 4913 generic.go:334] "Generic (PLEG): container finished" podID="afe1e161-7227-48ff-824e-01d26e5c7218" containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" exitCode=143 Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817394 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817431 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817446 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817459 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817473 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817491 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817503 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817516 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817523 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817530 4913 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817539 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817547 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817554 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817560 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817567 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817505 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817576 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817536 4913 scope.go:117] "RemoveContainer" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.817797 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818356 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818370 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818384 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818393 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818403 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818413 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818422 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818432 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818441 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818820 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818862 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818875 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818886 4913 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818897 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818907 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818918 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818928 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818938 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818947 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.818956 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819243 4913 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c7xtt" event={"ID":"afe1e161-7227-48ff-824e-01d26e5c7218","Type":"ContainerDied","Data":"2833783e18704972728c468ce917bd320a34d8e1f9fbe5476ad9edc9fb8db6c8"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819273 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819286 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819298 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819309 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819319 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819329 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819339 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} Jan 21 
06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819348 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819359 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819370 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819489 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerDied","Data":"6d3afdae4860943b42d4b2b72247b6b2eecab2dd6155e56e7e96c02f450df3f7"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819355 4913 generic.go:334] "Generic (PLEG): container finished" podID="0c639508-4011-41af-8cb2-17be3ad6062c" containerID="6d3afdae4860943b42d4b2b72247b6b2eecab2dd6155e56e7e96c02f450df3f7" exitCode=0 Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.819630 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"0c4a2914541f79ea25f1ce921d3b7652030d36cfe969ec4d4fe7617ec7d1ed27"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.821674 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/2.log" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.822453 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/1.log" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.822517 4913 generic.go:334] "Generic (PLEG): container finished" podID="b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf" containerID="35babac105f6583b573111491729f92109bcb54b7a16fc5739e17df46ec6cc70" exitCode=2 Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.822553 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerDied","Data":"35babac105f6583b573111491729f92109bcb54b7a16fc5739e17df46ec6cc70"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.822577 4913 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6"} Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.823494 4913 scope.go:117] "RemoveContainer" containerID="35babac105f6583b573111491729f92109bcb54b7a16fc5739e17df46ec6cc70" Jan 21 06:45:29 crc kubenswrapper[4913]: E0121 06:45:29.824023 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gn6lz_openshift-multus(b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf)\"" pod="openshift-multus/multus-gn6lz" podUID="b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.846998 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.879246 4913 scope.go:117] "RemoveContainer" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.907396 4913 scope.go:117] "RemoveContainer" 
containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.914414 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c7xtt"] Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.921801 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c7xtt"] Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.937888 4913 scope.go:117] "RemoveContainer" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.957497 4913 scope.go:117] "RemoveContainer" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" Jan 21 06:45:29 crc kubenswrapper[4913]: I0121 06:45:29.973307 4913 scope.go:117] "RemoveContainer" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.023783 4913 scope.go:117] "RemoveContainer" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.039773 4913 scope.go:117] "RemoveContainer" containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.056129 4913 scope.go:117] "RemoveContainer" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.073557 4913 scope.go:117] "RemoveContainer" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.074040 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": container with ID starting with 
e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c not found: ID does not exist" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.074082 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} err="failed to get container status \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": rpc error: code = NotFound desc = could not find container \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": container with ID starting with e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.074108 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.074439 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": container with ID starting with 34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42 not found: ID does not exist" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.074468 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} err="failed to get container status \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": rpc error: code = NotFound desc = could not find container \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": container with ID starting with 34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42 not found: ID does not 
exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.074487 4913 scope.go:117] "RemoveContainer" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.075562 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": container with ID starting with d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095 not found: ID does not exist" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.075613 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} err="failed to get container status \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": rpc error: code = NotFound desc = could not find container \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": container with ID starting with d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.075633 4913 scope.go:117] "RemoveContainer" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.075865 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": container with ID starting with 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce not found: ID does not exist" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.075896 4913 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} err="failed to get container status \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": rpc error: code = NotFound desc = could not find container \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": container with ID starting with 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.075913 4913 scope.go:117] "RemoveContainer" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.076092 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": container with ID starting with f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c not found: ID does not exist" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076118 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} err="failed to get container status \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": rpc error: code = NotFound desc = could not find container \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": container with ID starting with f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076134 4913 scope.go:117] "RemoveContainer" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.076301 4913 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": container with ID starting with 54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3 not found: ID does not exist" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076326 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} err="failed to get container status \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": rpc error: code = NotFound desc = could not find container \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": container with ID starting with 54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076342 4913 scope.go:117] "RemoveContainer" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.076488 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": container with ID starting with 1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0 not found: ID does not exist" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076509 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} err="failed to get container status \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": rpc error: code = NotFound desc = could 
not find container \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": container with ID starting with 1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076523 4913 scope.go:117] "RemoveContainer" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.076675 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": container with ID starting with 5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4 not found: ID does not exist" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076698 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} err="failed to get container status \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": rpc error: code = NotFound desc = could not find container \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": container with ID starting with 5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076716 4913 scope.go:117] "RemoveContainer" containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.076904 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": container with ID starting with 6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94 not found: 
ID does not exist" containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076931 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} err="failed to get container status \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": rpc error: code = NotFound desc = could not find container \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": container with ID starting with 6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.076946 4913 scope.go:117] "RemoveContainer" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" Jan 21 06:45:30 crc kubenswrapper[4913]: E0121 06:45:30.077181 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": container with ID starting with 7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3 not found: ID does not exist" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.077211 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} err="failed to get container status \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": rpc error: code = NotFound desc = could not find container \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": container with ID starting with 7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.077227 4913 
scope.go:117] "RemoveContainer" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.077486 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} err="failed to get container status \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": rpc error: code = NotFound desc = could not find container \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": container with ID starting with e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.077526 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.077964 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} err="failed to get container status \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": rpc error: code = NotFound desc = could not find container \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": container with ID starting with 34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.077990 4913 scope.go:117] "RemoveContainer" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.078187 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} err="failed to get container status \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": rpc 
error: code = NotFound desc = could not find container \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": container with ID starting with d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.078221 4913 scope.go:117] "RemoveContainer" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.078460 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} err="failed to get container status \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": rpc error: code = NotFound desc = could not find container \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": container with ID starting with 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.078479 4913 scope.go:117] "RemoveContainer" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.078807 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} err="failed to get container status \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": rpc error: code = NotFound desc = could not find container \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": container with ID starting with f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.078835 4913 scope.go:117] "RemoveContainer" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" Jan 21 06:45:30 crc 
kubenswrapper[4913]: I0121 06:45:30.079043 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} err="failed to get container status \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": rpc error: code = NotFound desc = could not find container \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": container with ID starting with 54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.079067 4913 scope.go:117] "RemoveContainer" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.079264 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} err="failed to get container status \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": rpc error: code = NotFound desc = could not find container \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": container with ID starting with 1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.079288 4913 scope.go:117] "RemoveContainer" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.079735 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} err="failed to get container status \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": rpc error: code = NotFound desc = could not find container \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": container 
with ID starting with 5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.079756 4913 scope.go:117] "RemoveContainer" containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.080783 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} err="failed to get container status \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": rpc error: code = NotFound desc = could not find container \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": container with ID starting with 6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.080811 4913 scope.go:117] "RemoveContainer" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081109 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} err="failed to get container status \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": rpc error: code = NotFound desc = could not find container \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": container with ID starting with 7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081129 4913 scope.go:117] "RemoveContainer" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081377 4913 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} err="failed to get container status \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": rpc error: code = NotFound desc = could not find container \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": container with ID starting with e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081430 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081727 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} err="failed to get container status \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": rpc error: code = NotFound desc = could not find container \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": container with ID starting with 34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081748 4913 scope.go:117] "RemoveContainer" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.081985 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} err="failed to get container status \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": rpc error: code = NotFound desc = could not find container \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": container with ID starting with d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095 not found: ID does not 
exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082003 4913 scope.go:117] "RemoveContainer" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082194 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} err="failed to get container status \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": rpc error: code = NotFound desc = could not find container \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": container with ID starting with 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082210 4913 scope.go:117] "RemoveContainer" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082439 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} err="failed to get container status \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": rpc error: code = NotFound desc = could not find container \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": container with ID starting with f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082457 4913 scope.go:117] "RemoveContainer" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082727 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} err="failed to get container status 
\"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": rpc error: code = NotFound desc = could not find container \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": container with ID starting with 54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.082758 4913 scope.go:117] "RemoveContainer" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083010 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} err="failed to get container status \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": rpc error: code = NotFound desc = could not find container \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": container with ID starting with 1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083032 4913 scope.go:117] "RemoveContainer" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083235 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} err="failed to get container status \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": rpc error: code = NotFound desc = could not find container \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": container with ID starting with 5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083252 4913 scope.go:117] "RemoveContainer" 
containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083602 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} err="failed to get container status \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": rpc error: code = NotFound desc = could not find container \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": container with ID starting with 6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083621 4913 scope.go:117] "RemoveContainer" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083876 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} err="failed to get container status \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": rpc error: code = NotFound desc = could not find container \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": container with ID starting with 7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.083895 4913 scope.go:117] "RemoveContainer" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.084176 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} err="failed to get container status \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": rpc error: code = NotFound desc = could 
not find container \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": container with ID starting with e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.084195 4913 scope.go:117] "RemoveContainer" containerID="34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.084897 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42"} err="failed to get container status \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": rpc error: code = NotFound desc = could not find container \"34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42\": container with ID starting with 34a38332eb328dc059957c5e4995a7f5dc8d332a72fa16d5dd9e5180c6d03c42 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.084925 4913 scope.go:117] "RemoveContainer" containerID="d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.085176 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095"} err="failed to get container status \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": rpc error: code = NotFound desc = could not find container \"d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095\": container with ID starting with d867b4c08da0165b79b13698fe37181953243a3a37ce397087eee92c73c74095 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.085198 4913 scope.go:117] "RemoveContainer" containerID="19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 
06:45:30.085654 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce"} err="failed to get container status \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": rpc error: code = NotFound desc = could not find container \"19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce\": container with ID starting with 19316fb452f85b31e01d97fb5d023e22d6ed091ebf8a4e54d920b3b436a9b4ce not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.085673 4913 scope.go:117] "RemoveContainer" containerID="f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086021 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c"} err="failed to get container status \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": rpc error: code = NotFound desc = could not find container \"f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c\": container with ID starting with f065517e068d6f86c57ca8dd70b74c673049890d28dab92209eba55043a5128c not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086039 4913 scope.go:117] "RemoveContainer" containerID="54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086381 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3"} err="failed to get container status \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": rpc error: code = NotFound desc = could not find container \"54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3\": container with ID starting with 
54cff962784424a318c92d285ed546d9765575283f83c3fe6e7022b32ed576a3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086401 4913 scope.go:117] "RemoveContainer" containerID="1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086723 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0"} err="failed to get container status \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": rpc error: code = NotFound desc = could not find container \"1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0\": container with ID starting with 1f46ebf235fd679713867b696b36e8c0c46b1368a07196516f28b7a7cd9746c0 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086741 4913 scope.go:117] "RemoveContainer" containerID="5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086963 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4"} err="failed to get container status \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": rpc error: code = NotFound desc = could not find container \"5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4\": container with ID starting with 5424dcd4f0c44d9678d2efd00272a4fb2e5566f295cd976e5d5b85c42f8b38e4 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.086979 4913 scope.go:117] "RemoveContainer" containerID="6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.087221 4913 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94"} err="failed to get container status \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": rpc error: code = NotFound desc = could not find container \"6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94\": container with ID starting with 6aed870ccd89b8777a6ec6c0b1c4124da76fe28d396db025789e19aca0f59b94 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.087238 4913 scope.go:117] "RemoveContainer" containerID="7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.087523 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3"} err="failed to get container status \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": rpc error: code = NotFound desc = could not find container \"7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3\": container with ID starting with 7d497ea00a26849157e30c835b86f05b9d4b9dca8515521cc1f21d92f8b57bd3 not found: ID does not exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.087540 4913 scope.go:117] "RemoveContainer" containerID="e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.087862 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c"} err="failed to get container status \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": rpc error: code = NotFound desc = could not find container \"e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c\": container with ID starting with e595ac391c8f632d42bcba691621c9b0cecec785e3010155cb18fb1b25f44f2c not found: ID does not 
exist" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.545404 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe1e161-7227-48ff-824e-01d26e5c7218" path="/var/lib/kubelet/pods/afe1e161-7227-48ff-824e-01d26e5c7218/volumes" Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.832648 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"9040624e31a6f981c383ebce3506cd50ea61698985e6cca91056b0a30aa01438"} Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.832733 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"d8fa4c98d5ac3834bd30f130843d573cad0414f319bde25e3962962e22b9f5e9"} Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.832761 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"84e4a4905f19f321b3ef43e2e792ad413e168c773a295bb5bc138227d9970d6b"} Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.832785 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"7d7956d8f9537bbf32a6f2a8891c8dc27703946d987485c0cce511e12adb6ede"} Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.832809 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"1f919245e12fcc4250d0bc7562028125d0807571ff382b096b28abff3c2c6597"} Jan 21 06:45:30 crc kubenswrapper[4913]: I0121 06:45:30.832832 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"c0a637daf2c1b4c12155b2e87b075cd948deaad0e4fdc7645135fcf0ebf742e6"} Jan 21 06:45:33 crc kubenswrapper[4913]: I0121 06:45:33.855799 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"3ab535b5da63a8a36227de3f62ca5ebf05226eb66b99fdb16430fc6d602cc89a"} Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.868332 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" event={"ID":"0c639508-4011-41af-8cb2-17be3ad6062c","Type":"ContainerStarted","Data":"6bef4975678b8df7e304e0c0fc594b605abc6867733664f5e484d3f7bee035ba"} Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.869080 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.869145 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.869250 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.895781 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.895880 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" podStartSLOduration=6.895861772 podStartE2EDuration="6.895861772s" podCreationTimestamp="2026-01-21 06:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 06:45:35.89434434 +0000 UTC m=+625.690704023" watchObservedRunningTime="2026-01-21 06:45:35.895861772 +0000 UTC m=+625.692221445" Jan 21 06:45:35 crc kubenswrapper[4913]: I0121 06:45:35.896116 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:45:45 crc kubenswrapper[4913]: I0121 06:45:45.526916 4913 scope.go:117] "RemoveContainer" containerID="35babac105f6583b573111491729f92109bcb54b7a16fc5739e17df46ec6cc70" Jan 21 06:45:45 crc kubenswrapper[4913]: E0121 06:45:45.528102 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gn6lz_openshift-multus(b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf)\"" pod="openshift-multus/multus-gn6lz" podUID="b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.324477 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg"] Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.327180 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.330303 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.338299 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg"] Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.483214 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.483296 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w69rl\" (UniqueName: \"kubernetes.io/projected/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-kube-api-access-w69rl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.483391 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: 
I0121 06:45:58.526840 4913 scope.go:117] "RemoveContainer" containerID="35babac105f6583b573111491729f92109bcb54b7a16fc5739e17df46ec6cc70" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.584299 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w69rl\" (UniqueName: \"kubernetes.io/projected/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-kube-api-access-w69rl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.584650 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.584709 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.585202 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc 
kubenswrapper[4913]: I0121 06:45:58.585983 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.614394 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w69rl\" (UniqueName: \"kubernetes.io/projected/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-kube-api-access-w69rl\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: I0121 06:45:58.642867 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: E0121 06:45:58.685145 4913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(c49ac2a11096d5b2b5b5d0b0477435db90b2dda00491e46a7942a4629387d060): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 21 06:45:58 crc kubenswrapper[4913]: E0121 06:45:58.685235 4913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(c49ac2a11096d5b2b5b5d0b0477435db90b2dda00491e46a7942a4629387d060): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: E0121 06:45:58.685258 4913 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(c49ac2a11096d5b2b5b5d0b0477435db90b2dda00491e46a7942a4629387d060): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:58 crc kubenswrapper[4913]: E0121 06:45:58.685315 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace(6bd2ad61-8bab-42d9-a09c-cf48255cc25c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace(6bd2ad61-8bab-42d9-a09c-cf48255cc25c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(c49ac2a11096d5b2b5b5d0b0477435db90b2dda00491e46a7942a4629387d060): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" Jan 21 06:45:59 crc kubenswrapper[4913]: I0121 06:45:59.039644 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/2.log" Jan 21 06:45:59 crc kubenswrapper[4913]: I0121 06:45:59.040164 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/1.log" Jan 21 06:45:59 crc kubenswrapper[4913]: I0121 06:45:59.040231 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gn6lz" event={"ID":"b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf","Type":"ContainerStarted","Data":"18d40612f6a3d6a699298b285688f6c7574c39b9f576da43152e3f7778531a36"} Jan 21 06:45:59 crc kubenswrapper[4913]: I0121 06:45:59.040246 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:59 crc kubenswrapper[4913]: I0121 06:45:59.040755 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:59 crc kubenswrapper[4913]: E0121 06:45:59.062144 4913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(2c496d3d09bd8422601b12a1d852b82d2136ae139b7a48ff92cdf58c4d6eacc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 21 06:45:59 crc kubenswrapper[4913]: E0121 06:45:59.062218 4913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(2c496d3d09bd8422601b12a1d852b82d2136ae139b7a48ff92cdf58c4d6eacc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:59 crc kubenswrapper[4913]: E0121 06:45:59.062246 4913 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(2c496d3d09bd8422601b12a1d852b82d2136ae139b7a48ff92cdf58c4d6eacc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:45:59 crc kubenswrapper[4913]: E0121 06:45:59.062293 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace(6bd2ad61-8bab-42d9-a09c-cf48255cc25c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace(6bd2ad61-8bab-42d9-a09c-cf48255cc25c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_openshift-marketplace_6bd2ad61-8bab-42d9-a09c-cf48255cc25c_0(2c496d3d09bd8422601b12a1d852b82d2136ae139b7a48ff92cdf58c4d6eacc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" Jan 21 06:45:59 crc kubenswrapper[4913]: I0121 06:45:59.626778 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v5kmt" Jan 21 06:46:10 crc kubenswrapper[4913]: I0121 06:46:10.788278 4913 scope.go:117] "RemoveContainer" containerID="f947e88492ed4402a375f8ddd8107048ba234b0c00f34fa72e3fc5eba93312a6" Jan 21 06:46:12 crc kubenswrapper[4913]: I0121 06:46:12.128539 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/2.log" Jan 21 06:46:13 crc kubenswrapper[4913]: I0121 06:46:13.525804 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:46:13 crc kubenswrapper[4913]: I0121 06:46:13.526408 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:46:14 crc kubenswrapper[4913]: I0121 06:46:14.015340 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg"] Jan 21 06:46:14 crc kubenswrapper[4913]: I0121 06:46:14.144804 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" event={"ID":"6bd2ad61-8bab-42d9-a09c-cf48255cc25c","Type":"ContainerStarted","Data":"6bb2f0038e20a1e277f2b5add84a0b8c859fc34895666d4599568b9186f8fcb2"} Jan 21 06:46:15 crc kubenswrapper[4913]: I0121 06:46:15.153372 4913 generic.go:334] "Generic (PLEG): container finished" podID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerID="1fb95c7ac4eac0eb6167e3ebacdca54c3011f69b114dec155978458b577f1bdc" exitCode=0 Jan 21 06:46:15 crc kubenswrapper[4913]: I0121 06:46:15.153496 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" event={"ID":"6bd2ad61-8bab-42d9-a09c-cf48255cc25c","Type":"ContainerDied","Data":"1fb95c7ac4eac0eb6167e3ebacdca54c3011f69b114dec155978458b577f1bdc"} Jan 21 06:46:15 crc kubenswrapper[4913]: I0121 06:46:15.155227 4913 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 06:46:17 crc kubenswrapper[4913]: I0121 06:46:17.169370 4913 generic.go:334] "Generic (PLEG): container finished" podID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerID="030083572c4fca58b7011028c8a3a63e0e0fb2bd13336495217780d023d18a12" exitCode=0 Jan 21 06:46:17 crc kubenswrapper[4913]: I0121 06:46:17.169489 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" 
event={"ID":"6bd2ad61-8bab-42d9-a09c-cf48255cc25c","Type":"ContainerDied","Data":"030083572c4fca58b7011028c8a3a63e0e0fb2bd13336495217780d023d18a12"} Jan 21 06:46:18 crc kubenswrapper[4913]: I0121 06:46:18.180185 4913 generic.go:334] "Generic (PLEG): container finished" podID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerID="707eec87cf656b6427ef516b433aadd6cc2ae8aa4a9a1c826213c56a11f82258" exitCode=0 Jan 21 06:46:18 crc kubenswrapper[4913]: I0121 06:46:18.180252 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" event={"ID":"6bd2ad61-8bab-42d9-a09c-cf48255cc25c","Type":"ContainerDied","Data":"707eec87cf656b6427ef516b433aadd6cc2ae8aa4a9a1c826213c56a11f82258"} Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.467942 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.485238 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-util\") pod \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.485284 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-bundle\") pod \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\" (UID: \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.485359 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w69rl\" (UniqueName: \"kubernetes.io/projected/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-kube-api-access-w69rl\") pod \"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\" (UID: 
\"6bd2ad61-8bab-42d9-a09c-cf48255cc25c\") " Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.486373 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-bundle" (OuterVolumeSpecName: "bundle") pod "6bd2ad61-8bab-42d9-a09c-cf48255cc25c" (UID: "6bd2ad61-8bab-42d9-a09c-cf48255cc25c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.491199 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-kube-api-access-w69rl" (OuterVolumeSpecName: "kube-api-access-w69rl") pod "6bd2ad61-8bab-42d9-a09c-cf48255cc25c" (UID: "6bd2ad61-8bab-42d9-a09c-cf48255cc25c"). InnerVolumeSpecName "kube-api-access-w69rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.499605 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-util" (OuterVolumeSpecName: "util") pod "6bd2ad61-8bab-42d9-a09c-cf48255cc25c" (UID: "6bd2ad61-8bab-42d9-a09c-cf48255cc25c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.586353 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w69rl\" (UniqueName: \"kubernetes.io/projected/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-kube-api-access-w69rl\") on node \"crc\" DevicePath \"\"" Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.586383 4913 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-util\") on node \"crc\" DevicePath \"\"" Jan 21 06:46:19 crc kubenswrapper[4913]: I0121 06:46:19.586393 4913 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd2ad61-8bab-42d9-a09c-cf48255cc25c-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:46:20 crc kubenswrapper[4913]: I0121 06:46:20.194807 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" event={"ID":"6bd2ad61-8bab-42d9-a09c-cf48255cc25c","Type":"ContainerDied","Data":"6bb2f0038e20a1e277f2b5add84a0b8c859fc34895666d4599568b9186f8fcb2"} Jan 21 06:46:20 crc kubenswrapper[4913]: I0121 06:46:20.194903 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bb2f0038e20a1e277f2b5add84a0b8c859fc34895666d4599568b9186f8fcb2" Jan 21 06:46:20 crc kubenswrapper[4913]: I0121 06:46:20.194865 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.594407 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"] Jan 21 06:46:31 crc kubenswrapper[4913]: E0121 06:46:31.595284 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="util" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.595301 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="util" Jan 21 06:46:31 crc kubenswrapper[4913]: E0121 06:46:31.595323 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="pull" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.595331 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="pull" Jan 21 06:46:31 crc kubenswrapper[4913]: E0121 06:46:31.595353 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="extract" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.595360 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="extract" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.595466 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd2ad61-8bab-42d9-a09c-cf48255cc25c" containerName="extract" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.595965 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.597624 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.598290 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.598553 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.598651 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.599836 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ch447" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.616181 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"] Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.640783 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e89d9462-a010-4873-9a7a-ff85114b35f9-apiservice-cert\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.640850 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e89d9462-a010-4873-9a7a-ff85114b35f9-webhook-cert\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: 
\"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.640879 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5h6\" (UniqueName: \"kubernetes.io/projected/e89d9462-a010-4873-9a7a-ff85114b35f9-kube-api-access-bp5h6\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.741374 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e89d9462-a010-4873-9a7a-ff85114b35f9-apiservice-cert\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.741455 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e89d9462-a010-4873-9a7a-ff85114b35f9-webhook-cert\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.741474 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp5h6\" (UniqueName: \"kubernetes.io/projected/e89d9462-a010-4873-9a7a-ff85114b35f9-kube-api-access-bp5h6\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.748171 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e89d9462-a010-4873-9a7a-ff85114b35f9-webhook-cert\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.751434 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e89d9462-a010-4873-9a7a-ff85114b35f9-apiservice-cert\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.767375 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp5h6\" (UniqueName: \"kubernetes.io/projected/e89d9462-a010-4873-9a7a-ff85114b35f9-kube-api-access-bp5h6\") pod \"metallb-operator-controller-manager-69fc59f99b-jzt7r\" (UID: \"e89d9462-a010-4873-9a7a-ff85114b35f9\") " pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.912229 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.916195 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"] Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.916930 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.918663 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.918882 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.919004 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-j4q7v" Jan 21 06:46:31 crc kubenswrapper[4913]: I0121 06:46:31.926172 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"] Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.045217 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/09278577-df56-4906-b822-79df291100ae-webhook-cert\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.045305 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/09278577-df56-4906-b822-79df291100ae-apiservice-cert\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.045401 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lswkw\" (UniqueName: 
\"kubernetes.io/projected/09278577-df56-4906-b822-79df291100ae-kube-api-access-lswkw\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.146645 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/09278577-df56-4906-b822-79df291100ae-webhook-cert\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.146982 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/09278577-df56-4906-b822-79df291100ae-apiservice-cert\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.147007 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lswkw\" (UniqueName: \"kubernetes.io/projected/09278577-df56-4906-b822-79df291100ae-kube-api-access-lswkw\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.151253 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/09278577-df56-4906-b822-79df291100ae-apiservice-cert\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" 
Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.163637 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/09278577-df56-4906-b822-79df291100ae-webhook-cert\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.165262 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lswkw\" (UniqueName: \"kubernetes.io/projected/09278577-df56-4906-b822-79df291100ae-kube-api-access-lswkw\") pod \"metallb-operator-webhook-server-6879b6b49c-65nv9\" (UID: \"09278577-df56-4906-b822-79df291100ae\") " pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.209250 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r"] Jan 21 06:46:32 crc kubenswrapper[4913]: W0121 06:46:32.214925 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode89d9462_a010_4873_9a7a_ff85114b35f9.slice/crio-b05e6b6cbfa3858c736d20f9557bc544283ee3dc7789d5d17544710cacdc0b8b WatchSource:0}: Error finding container b05e6b6cbfa3858c736d20f9557bc544283ee3dc7789d5d17544710cacdc0b8b: Status 404 returned error can't find the container with id b05e6b6cbfa3858c736d20f9557bc544283ee3dc7789d5d17544710cacdc0b8b Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.258392 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" event={"ID":"e89d9462-a010-4873-9a7a-ff85114b35f9","Type":"ContainerStarted","Data":"b05e6b6cbfa3858c736d20f9557bc544283ee3dc7789d5d17544710cacdc0b8b"} Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.279250 
4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" Jan 21 06:46:32 crc kubenswrapper[4913]: I0121 06:46:32.468213 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9"] Jan 21 06:46:33 crc kubenswrapper[4913]: I0121 06:46:33.267150 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" event={"ID":"09278577-df56-4906-b822-79df291100ae","Type":"ContainerStarted","Data":"a6c36be76f6a795c1d11f0fce3f6feb92b020c6decec288a4f97d7a9d6afdf24"} Jan 21 06:46:38 crc kubenswrapper[4913]: I0121 06:46:38.292950 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" event={"ID":"e89d9462-a010-4873-9a7a-ff85114b35f9","Type":"ContainerStarted","Data":"c10386d500b2efc304025d02efa7b67410b4849c9abee05e8bb31a6873a1d472"} Jan 21 06:46:38 crc kubenswrapper[4913]: I0121 06:46:38.295396 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" event={"ID":"09278577-df56-4906-b822-79df291100ae","Type":"ContainerStarted","Data":"b658dd5e97e95a3396b54af713b38f3b115c1960238c95cd930c7b83616894f8"} Jan 21 06:46:38 crc kubenswrapper[4913]: I0121 06:46:38.295635 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" Jan 21 06:46:38 crc kubenswrapper[4913]: I0121 06:46:38.315264 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" podStartSLOduration=2.202074755 podStartE2EDuration="7.315248874s" podCreationTimestamp="2026-01-21 06:46:31 +0000 UTC" firstStartedPulling="2026-01-21 06:46:32.216949662 +0000 UTC m=+682.013309335" 
lastFinishedPulling="2026-01-21 06:46:37.330123771 +0000 UTC m=+687.126483454" observedRunningTime="2026-01-21 06:46:38.311974614 +0000 UTC m=+688.108334287" watchObservedRunningTime="2026-01-21 06:46:38.315248874 +0000 UTC m=+688.111608547" Jan 21 06:46:38 crc kubenswrapper[4913]: I0121 06:46:38.338453 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" podStartSLOduration=2.479985855 podStartE2EDuration="7.338433711s" podCreationTimestamp="2026-01-21 06:46:31 +0000 UTC" firstStartedPulling="2026-01-21 06:46:32.477557057 +0000 UTC m=+682.273916730" lastFinishedPulling="2026-01-21 06:46:37.336004913 +0000 UTC m=+687.132364586" observedRunningTime="2026-01-21 06:46:38.335067489 +0000 UTC m=+688.131427182" watchObservedRunningTime="2026-01-21 06:46:38.338433711 +0000 UTC m=+688.134793404" Jan 21 06:46:39 crc kubenswrapper[4913]: I0121 06:46:39.302864 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" Jan 21 06:46:52 crc kubenswrapper[4913]: I0121 06:46:52.287760 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6879b6b49c-65nv9" Jan 21 06:47:08 crc kubenswrapper[4913]: I0121 06:47:08.319350 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:47:08 crc kubenswrapper[4913]: I0121 06:47:08.319995 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:47:11 crc kubenswrapper[4913]: I0121 06:47:11.915404 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-69fc59f99b-jzt7r" Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.834384 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-zwvdk"] Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.836887 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.839305 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.839528 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2ml6m" Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.839825 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.839887 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8"] Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.840670 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.842114 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.857433 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8"] Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.922456 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qpr6d"] Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.923256 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qpr6d" Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.927697 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.928035 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.928161 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-nmnwm" Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.929404 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-pc8gk"] Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.929408 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 06:47:12 crc kubenswrapper[4913]: I0121 06:47:12.946430 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:12.951369 4913 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.006030 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-pc8gk"] Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041150 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fda16a07-5908-4736-9835-a29ce1f85a7e-metallb-excludel2\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041214 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww9tr\" (UniqueName: \"kubernetes.io/projected/fda16a07-5908-4736-9835-a29ce1f85a7e-kube-api-access-ww9tr\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041249 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f6668fc-0d01-4942-abbe-758690c86480-metrics-certs\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041274 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 
06:47:13.041299 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1f6668fc-0d01-4942-abbe-758690c86480-frr-startup\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041319 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-frr-conf\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041372 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-frr-sockets\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041392 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shjxb\" (UniqueName: \"kubernetes.io/projected/1f6668fc-0d01-4942-abbe-758690c86480-kube-api-access-shjxb\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041422 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a33768cf-18ec-4cec-94fb-303b0779eb59-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-mnxc8\" (UID: \"a33768cf-18ec-4cec-94fb-303b0779eb59\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041455 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-metrics\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041478 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-reloader\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041499 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ngvn\" (UniqueName: \"kubernetes.io/projected/a33768cf-18ec-4cec-94fb-303b0779eb59-kube-api-access-5ngvn\") pod \"frr-k8s-webhook-server-7df86c4f6c-mnxc8\" (UID: \"a33768cf-18ec-4cec-94fb-303b0779eb59\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.041544 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-metrics-certs\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142160 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-cert\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142216 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fda16a07-5908-4736-9835-a29ce1f85a7e-metallb-excludel2\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142245 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww9tr\" (UniqueName: \"kubernetes.io/projected/fda16a07-5908-4736-9835-a29ce1f85a7e-kube-api-access-ww9tr\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142271 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f6668fc-0d01-4942-abbe-758690c86480-metrics-certs\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142318 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142338 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1f6668fc-0d01-4942-abbe-758690c86480-frr-startup\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142351 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-frr-conf\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " 
pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142374 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-frr-sockets\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142387 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shjxb\" (UniqueName: \"kubernetes.io/projected/1f6668fc-0d01-4942-abbe-758690c86480-kube-api-access-shjxb\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142404 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a33768cf-18ec-4cec-94fb-303b0779eb59-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-mnxc8\" (UID: \"a33768cf-18ec-4cec-94fb-303b0779eb59\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142425 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-metrics-certs\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142441 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gszl\" (UniqueName: \"kubernetes.io/projected/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-kube-api-access-8gszl\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 
21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142460 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-metrics\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142479 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-reloader\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142493 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ngvn\" (UniqueName: \"kubernetes.io/projected/a33768cf-18ec-4cec-94fb-303b0779eb59-kube-api-access-5ngvn\") pod \"frr-k8s-webhook-server-7df86c4f6c-mnxc8\" (UID: \"a33768cf-18ec-4cec-94fb-303b0779eb59\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.142518 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-metrics-certs\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.143035 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-frr-conf\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.143028 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-frr-sockets\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: E0121 06:47:13.143141 4913 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 06:47:13 crc kubenswrapper[4913]: E0121 06:47:13.143192 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist podName:fda16a07-5908-4736-9835-a29ce1f85a7e nodeName:}" failed. No retries permitted until 2026-01-21 06:47:13.643175424 +0000 UTC m=+723.439535097 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist") pod "speaker-qpr6d" (UID: "fda16a07-5908-4736-9835-a29ce1f85a7e") : secret "metallb-memberlist" not found Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.143141 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fda16a07-5908-4736-9835-a29ce1f85a7e-metallb-excludel2\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.143408 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1f6668fc-0d01-4942-abbe-758690c86480-frr-startup\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.143434 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-reloader\") pod \"frr-k8s-zwvdk\" (UID: 
\"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.143479 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1f6668fc-0d01-4942-abbe-758690c86480-metrics\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.147556 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-metrics-certs\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.147554 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f6668fc-0d01-4942-abbe-758690c86480-metrics-certs\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.148383 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a33768cf-18ec-4cec-94fb-303b0779eb59-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-mnxc8\" (UID: \"a33768cf-18ec-4cec-94fb-303b0779eb59\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.164848 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ngvn\" (UniqueName: \"kubernetes.io/projected/a33768cf-18ec-4cec-94fb-303b0779eb59-kube-api-access-5ngvn\") pod \"frr-k8s-webhook-server-7df86c4f6c-mnxc8\" (UID: \"a33768cf-18ec-4cec-94fb-303b0779eb59\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.169625 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shjxb\" (UniqueName: \"kubernetes.io/projected/1f6668fc-0d01-4942-abbe-758690c86480-kube-api-access-shjxb\") pod \"frr-k8s-zwvdk\" (UID: \"1f6668fc-0d01-4942-abbe-758690c86480\") " pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.169712 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww9tr\" (UniqueName: \"kubernetes.io/projected/fda16a07-5908-4736-9835-a29ce1f85a7e-kube-api-access-ww9tr\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.242998 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-metrics-certs\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.243041 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gszl\" (UniqueName: \"kubernetes.io/projected/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-kube-api-access-8gszl\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.243085 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-cert\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.244786 4913 reflector.go:368] Caches populated for *v1.Secret from 
object-"metallb-system"/"metallb-webhook-cert" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.247272 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-metrics-certs\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.257794 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-cert\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.261753 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gszl\" (UniqueName: \"kubernetes.io/projected/f59b1ff9-32cd-4fa1-916b-02dd65f8f75c-kube-api-access-8gszl\") pod \"controller-6968d8fdc4-pc8gk\" (UID: \"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c\") " pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.322904 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.455217 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.464799 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.609166 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-pc8gk"] Jan 21 06:47:13 crc kubenswrapper[4913]: W0121 06:47:13.623653 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf59b1ff9_32cd_4fa1_916b_02dd65f8f75c.slice/crio-a9713868134e91f84fe85b40971f5b2d6e534020bd978d6976e64d3369dc0cb9 WatchSource:0}: Error finding container a9713868134e91f84fe85b40971f5b2d6e534020bd978d6976e64d3369dc0cb9: Status 404 returned error can't find the container with id a9713868134e91f84fe85b40971f5b2d6e534020bd978d6976e64d3369dc0cb9 Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.650322 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:13 crc kubenswrapper[4913]: E0121 06:47:13.650519 4913 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 06:47:13 crc kubenswrapper[4913]: E0121 06:47:13.650662 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist podName:fda16a07-5908-4736-9835-a29ce1f85a7e nodeName:}" failed. No retries permitted until 2026-01-21 06:47:14.650637836 +0000 UTC m=+724.446997529 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist") pod "speaker-qpr6d" (UID: "fda16a07-5908-4736-9835-a29ce1f85a7e") : secret "metallb-memberlist" not found Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.701804 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"c9cd52245594a83181d0378a9d6bc65f4d633eb1a9bb3e77229a0b995736f558"} Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.702856 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-pc8gk" event={"ID":"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c","Type":"ContainerStarted","Data":"a9713868134e91f84fe85b40971f5b2d6e534020bd978d6976e64d3369dc0cb9"} Jan 21 06:47:13 crc kubenswrapper[4913]: I0121 06:47:13.910233 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8"] Jan 21 06:47:13 crc kubenswrapper[4913]: W0121 06:47:13.911509 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda33768cf_18ec_4cec_94fb_303b0779eb59.slice/crio-e63dbdb825ff16f5e3344759be52184d04b1aa275ab47fa03457a802918f8745 WatchSource:0}: Error finding container e63dbdb825ff16f5e3344759be52184d04b1aa275ab47fa03457a802918f8745: Status 404 returned error can't find the container with id e63dbdb825ff16f5e3344759be52184d04b1aa275ab47fa03457a802918f8745 Jan 21 06:47:14 crc kubenswrapper[4913]: I0121 06:47:14.668950 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:14 crc kubenswrapper[4913]: I0121 
06:47:14.686998 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fda16a07-5908-4736-9835-a29ce1f85a7e-memberlist\") pod \"speaker-qpr6d\" (UID: \"fda16a07-5908-4736-9835-a29ce1f85a7e\") " pod="metallb-system/speaker-qpr6d" Jan 21 06:47:14 crc kubenswrapper[4913]: I0121 06:47:14.723337 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-pc8gk" event={"ID":"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c","Type":"ContainerStarted","Data":"e18186b11fda2f691e21af3347a0ebbc4d394f52fd77f1a4624b5db704c7d3dc"} Jan 21 06:47:14 crc kubenswrapper[4913]: I0121 06:47:14.727258 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" event={"ID":"a33768cf-18ec-4cec-94fb-303b0779eb59","Type":"ContainerStarted","Data":"e63dbdb825ff16f5e3344759be52184d04b1aa275ab47fa03457a802918f8745"} Jan 21 06:47:14 crc kubenswrapper[4913]: I0121 06:47:14.743037 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-qpr6d" Jan 21 06:47:14 crc kubenswrapper[4913]: W0121 06:47:14.776972 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfda16a07_5908_4736_9835_a29ce1f85a7e.slice/crio-1af499bfb0896c1987d890aa44664cc4657ae85657c860b56b31e8266db55f6b WatchSource:0}: Error finding container 1af499bfb0896c1987d890aa44664cc4657ae85657c860b56b31e8266db55f6b: Status 404 returned error can't find the container with id 1af499bfb0896c1987d890aa44664cc4657ae85657c860b56b31e8266db55f6b Jan 21 06:47:15 crc kubenswrapper[4913]: I0121 06:47:15.737144 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qpr6d" event={"ID":"fda16a07-5908-4736-9835-a29ce1f85a7e","Type":"ContainerStarted","Data":"cda30f4c90493441fcfcd6fba83f7eea2db3df233d7da622acec5e58bcdcab38"} Jan 21 06:47:15 crc kubenswrapper[4913]: I0121 06:47:15.737556 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qpr6d" event={"ID":"fda16a07-5908-4736-9835-a29ce1f85a7e","Type":"ContainerStarted","Data":"1af499bfb0896c1987d890aa44664cc4657ae85657c860b56b31e8266db55f6b"} Jan 21 06:47:17 crc kubenswrapper[4913]: I0121 06:47:17.757281 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-pc8gk" event={"ID":"f59b1ff9-32cd-4fa1-916b-02dd65f8f75c","Type":"ContainerStarted","Data":"884a4fba2e168b17b4247e4425fd28ab08bb1279dbae39a4eefbf202191a6dda"} Jan 21 06:47:17 crc kubenswrapper[4913]: I0121 06:47:17.758884 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:17 crc kubenswrapper[4913]: I0121 06:47:17.765809 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qpr6d" 
event={"ID":"fda16a07-5908-4736-9835-a29ce1f85a7e","Type":"ContainerStarted","Data":"f8a1d591bc77857179fddb77b2e338e086699fcdf8729efede4e4dfc1769631f"} Jan 21 06:47:17 crc kubenswrapper[4913]: I0121 06:47:17.766247 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qpr6d" Jan 21 06:47:17 crc kubenswrapper[4913]: I0121 06:47:17.775263 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-pc8gk" podStartSLOduration=2.9120213169999998 podStartE2EDuration="5.775244568s" podCreationTimestamp="2026-01-21 06:47:12 +0000 UTC" firstStartedPulling="2026-01-21 06:47:13.721823501 +0000 UTC m=+723.518183184" lastFinishedPulling="2026-01-21 06:47:16.585046762 +0000 UTC m=+726.381406435" observedRunningTime="2026-01-21 06:47:17.773154651 +0000 UTC m=+727.569514334" watchObservedRunningTime="2026-01-21 06:47:17.775244568 +0000 UTC m=+727.571604241" Jan 21 06:47:17 crc kubenswrapper[4913]: I0121 06:47:17.793690 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qpr6d" podStartSLOduration=4.242662338 podStartE2EDuration="5.793672229s" podCreationTimestamp="2026-01-21 06:47:12 +0000 UTC" firstStartedPulling="2026-01-21 06:47:15.032779066 +0000 UTC m=+724.829138739" lastFinishedPulling="2026-01-21 06:47:16.583788957 +0000 UTC m=+726.380148630" observedRunningTime="2026-01-21 06:47:17.787131718 +0000 UTC m=+727.583491401" watchObservedRunningTime="2026-01-21 06:47:17.793672229 +0000 UTC m=+727.590031922" Jan 21 06:47:21 crc kubenswrapper[4913]: I0121 06:47:21.791875 4913 generic.go:334] "Generic (PLEG): container finished" podID="1f6668fc-0d01-4942-abbe-758690c86480" containerID="ada96d127aff0cb00e74e0727d183d8bf979048c153222547856b7e4a93f96ba" exitCode=0 Jan 21 06:47:21 crc kubenswrapper[4913]: I0121 06:47:21.791957 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" 
event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerDied","Data":"ada96d127aff0cb00e74e0727d183d8bf979048c153222547856b7e4a93f96ba"} Jan 21 06:47:21 crc kubenswrapper[4913]: I0121 06:47:21.795449 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" event={"ID":"a33768cf-18ec-4cec-94fb-303b0779eb59","Type":"ContainerStarted","Data":"30bea442576368f3c76ee6931135042d67a96aef9643c32bac4d75ef23447de5"} Jan 21 06:47:21 crc kubenswrapper[4913]: I0121 06:47:21.796132 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:21 crc kubenswrapper[4913]: I0121 06:47:21.850006 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" podStartSLOduration=3.066296596 podStartE2EDuration="9.849979148s" podCreationTimestamp="2026-01-21 06:47:12 +0000 UTC" firstStartedPulling="2026-01-21 06:47:13.913666261 +0000 UTC m=+723.710025934" lastFinishedPulling="2026-01-21 06:47:20.697348803 +0000 UTC m=+730.493708486" observedRunningTime="2026-01-21 06:47:21.842432338 +0000 UTC m=+731.638792011" watchObservedRunningTime="2026-01-21 06:47:21.849979148 +0000 UTC m=+731.646338861" Jan 21 06:47:22 crc kubenswrapper[4913]: I0121 06:47:22.805938 4913 generic.go:334] "Generic (PLEG): container finished" podID="1f6668fc-0d01-4942-abbe-758690c86480" containerID="806ff8ef8e3b96741bc84a91bcfe90f131d9476d6d3a257f3c7a0df319ae0e16" exitCode=0 Jan 21 06:47:22 crc kubenswrapper[4913]: I0121 06:47:22.806012 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerDied","Data":"806ff8ef8e3b96741bc84a91bcfe90f131d9476d6d3a257f3c7a0df319ae0e16"} Jan 21 06:47:23 crc kubenswrapper[4913]: I0121 06:47:23.328621 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="metallb-system/controller-6968d8fdc4-pc8gk" Jan 21 06:47:23 crc kubenswrapper[4913]: I0121 06:47:23.817685 4913 generic.go:334] "Generic (PLEG): container finished" podID="1f6668fc-0d01-4942-abbe-758690c86480" containerID="4152e49662ae3a85bc3bc86a8fc5ba7cc32e765f76b43dddbd288a395e5f7e88" exitCode=0 Jan 21 06:47:23 crc kubenswrapper[4913]: I0121 06:47:23.817868 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerDied","Data":"4152e49662ae3a85bc3bc86a8fc5ba7cc32e765f76b43dddbd288a395e5f7e88"} Jan 21 06:47:24 crc kubenswrapper[4913]: I0121 06:47:24.827897 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"0f79720eefbe1c707a734b0ae09aab1507dfc75991780b1c316368316e9cbd5f"} Jan 21 06:47:24 crc kubenswrapper[4913]: I0121 06:47:24.828295 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"2df5e7c00bde0880968e7e0d861462c796de5f8c66bd5174c44a6c83f5f42029"} Jan 21 06:47:24 crc kubenswrapper[4913]: I0121 06:47:24.828318 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"e0b60c06ce24c930a8be893df63221cb4cb39510343e5a702f4ab245646fe24f"} Jan 21 06:47:25 crc kubenswrapper[4913]: I0121 06:47:25.839383 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"d9d411daa074b4dd216952569b8b74d7e7fe4c7882bc53c8ddf92aa5e6ea46ea"} Jan 21 06:47:25 crc kubenswrapper[4913]: I0121 06:47:25.839935 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" 
event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"2c9178cd32d8ec6b82d069a475beb64e05d88663759500858bdbfb89ff73ef02"} Jan 21 06:47:25 crc kubenswrapper[4913]: I0121 06:47:25.839968 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-zwvdk" event={"ID":"1f6668fc-0d01-4942-abbe-758690c86480","Type":"ContainerStarted","Data":"9cad03763e3e5997ee9c989b7ff955af2724d7c91bf4731a560aafe148b8f48e"} Jan 21 06:47:25 crc kubenswrapper[4913]: I0121 06:47:25.840001 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:28 crc kubenswrapper[4913]: I0121 06:47:28.455846 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:28 crc kubenswrapper[4913]: I0121 06:47:28.489628 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:28 crc kubenswrapper[4913]: I0121 06:47:28.515544 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-zwvdk" podStartSLOduration=9.437657541 podStartE2EDuration="16.515521001s" podCreationTimestamp="2026-01-21 06:47:12 +0000 UTC" firstStartedPulling="2026-01-21 06:47:13.625665735 +0000 UTC m=+723.422025418" lastFinishedPulling="2026-01-21 06:47:20.703529205 +0000 UTC m=+730.499888878" observedRunningTime="2026-01-21 06:47:25.86715682 +0000 UTC m=+735.663516533" watchObservedRunningTime="2026-01-21 06:47:28.515521001 +0000 UTC m=+738.311880704" Jan 21 06:47:33 crc kubenswrapper[4913]: I0121 06:47:33.473965 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-mnxc8" Jan 21 06:47:34 crc kubenswrapper[4913]: I0121 06:47:34.747790 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qpr6d" Jan 21 06:47:38 crc kubenswrapper[4913]: I0121 
06:47:38.319646 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:47:38 crc kubenswrapper[4913]: I0121 06:47:38.320290 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.636298 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-hlfnm"] Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.637378 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.639928 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-9v972" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.639947 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.639957 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.700603 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-hlfnm"] Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.745319 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gn27\" (UniqueName: 
\"kubernetes.io/projected/bf14c82c-256f-4096-beb2-4e8be30564aa-kube-api-access-8gn27\") pod \"mariadb-operator-index-hlfnm\" (UID: \"bf14c82c-256f-4096-beb2-4e8be30564aa\") " pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.846767 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gn27\" (UniqueName: \"kubernetes.io/projected/bf14c82c-256f-4096-beb2-4e8be30564aa-kube-api-access-8gn27\") pod \"mariadb-operator-index-hlfnm\" (UID: \"bf14c82c-256f-4096-beb2-4e8be30564aa\") " pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.866879 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gn27\" (UniqueName: \"kubernetes.io/projected/bf14c82c-256f-4096-beb2-4e8be30564aa-kube-api-access-8gn27\") pod \"mariadb-operator-index-hlfnm\" (UID: \"bf14c82c-256f-4096-beb2-4e8be30564aa\") " pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:41 crc kubenswrapper[4913]: I0121 06:47:41.957091 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:42 crc kubenswrapper[4913]: I0121 06:47:42.360050 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-hlfnm"] Jan 21 06:47:42 crc kubenswrapper[4913]: I0121 06:47:42.966304 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hlfnm" event={"ID":"bf14c82c-256f-4096-beb2-4e8be30564aa","Type":"ContainerStarted","Data":"b5b38486a6db7d95112680710d969fd5155e2c4c2915cdad1a7de23e7a82cc84"} Jan 21 06:47:43 crc kubenswrapper[4913]: I0121 06:47:43.461613 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-zwvdk" Jan 21 06:47:43 crc kubenswrapper[4913]: I0121 06:47:43.643928 4913 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 06:47:44 crc kubenswrapper[4913]: I0121 06:47:44.980236 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hlfnm" event={"ID":"bf14c82c-256f-4096-beb2-4e8be30564aa","Type":"ContainerStarted","Data":"ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0"} Jan 21 06:47:45 crc kubenswrapper[4913]: I0121 06:47:45.005122 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-hlfnm" podStartSLOduration=2.473251419 podStartE2EDuration="4.005090759s" podCreationTimestamp="2026-01-21 06:47:41 +0000 UTC" firstStartedPulling="2026-01-21 06:47:42.362743783 +0000 UTC m=+752.159103456" lastFinishedPulling="2026-01-21 06:47:43.894583123 +0000 UTC m=+753.690942796" observedRunningTime="2026-01-21 06:47:44.995428661 +0000 UTC m=+754.791788374" watchObservedRunningTime="2026-01-21 06:47:45.005090759 +0000 UTC m=+754.801450472" Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.007112 4913 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-hlfnm"] Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.623390 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-jqn8q"] Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.624975 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.643535 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-jqn8q"] Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.823192 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rhrg\" (UniqueName: \"kubernetes.io/projected/211a0853-fb6a-4002-98be-aa01c99eaa7d-kube-api-access-4rhrg\") pod \"mariadb-operator-index-jqn8q\" (UID: \"211a0853-fb6a-4002-98be-aa01c99eaa7d\") " pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.925442 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rhrg\" (UniqueName: \"kubernetes.io/projected/211a0853-fb6a-4002-98be-aa01c99eaa7d-kube-api-access-4rhrg\") pod \"mariadb-operator-index-jqn8q\" (UID: \"211a0853-fb6a-4002-98be-aa01c99eaa7d\") " pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.957499 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rhrg\" (UniqueName: \"kubernetes.io/projected/211a0853-fb6a-4002-98be-aa01c99eaa7d-kube-api-access-4rhrg\") pod \"mariadb-operator-index-jqn8q\" (UID: \"211a0853-fb6a-4002-98be-aa01c99eaa7d\") " pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.959387 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:46 crc kubenswrapper[4913]: I0121 06:47:46.995302 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-hlfnm" podUID="bf14c82c-256f-4096-beb2-4e8be30564aa" containerName="registry-server" containerID="cri-o://ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0" gracePeriod=2 Jan 21 06:47:47 crc kubenswrapper[4913]: I0121 06:47:47.180034 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-jqn8q"] Jan 21 06:47:47 crc kubenswrapper[4913]: W0121 06:47:47.188703 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod211a0853_fb6a_4002_98be_aa01c99eaa7d.slice/crio-db0a46a8fe44c42a9419cf6ef2e55a5438175fdc2e025cff0d77a8ceb555851b WatchSource:0}: Error finding container db0a46a8fe44c42a9419cf6ef2e55a5438175fdc2e025cff0d77a8ceb555851b: Status 404 returned error can't find the container with id db0a46a8fe44c42a9419cf6ef2e55a5438175fdc2e025cff0d77a8ceb555851b Jan 21 06:47:47 crc kubenswrapper[4913]: I0121 06:47:47.315848 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:47 crc kubenswrapper[4913]: I0121 06:47:47.437256 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gn27\" (UniqueName: \"kubernetes.io/projected/bf14c82c-256f-4096-beb2-4e8be30564aa-kube-api-access-8gn27\") pod \"bf14c82c-256f-4096-beb2-4e8be30564aa\" (UID: \"bf14c82c-256f-4096-beb2-4e8be30564aa\") " Jan 21 06:47:47 crc kubenswrapper[4913]: I0121 06:47:47.444779 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf14c82c-256f-4096-beb2-4e8be30564aa-kube-api-access-8gn27" (OuterVolumeSpecName: "kube-api-access-8gn27") pod "bf14c82c-256f-4096-beb2-4e8be30564aa" (UID: "bf14c82c-256f-4096-beb2-4e8be30564aa"). InnerVolumeSpecName "kube-api-access-8gn27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:47:47 crc kubenswrapper[4913]: I0121 06:47:47.538759 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gn27\" (UniqueName: \"kubernetes.io/projected/bf14c82c-256f-4096-beb2-4e8be30564aa-kube-api-access-8gn27\") on node \"crc\" DevicePath \"\"" Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.007360 4913 generic.go:334] "Generic (PLEG): container finished" podID="bf14c82c-256f-4096-beb2-4e8be30564aa" containerID="ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0" exitCode=0 Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.007430 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-hlfnm" Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.007459 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hlfnm" event={"ID":"bf14c82c-256f-4096-beb2-4e8be30564aa","Type":"ContainerDied","Data":"ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0"} Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.007492 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hlfnm" event={"ID":"bf14c82c-256f-4096-beb2-4e8be30564aa","Type":"ContainerDied","Data":"b5b38486a6db7d95112680710d969fd5155e2c4c2915cdad1a7de23e7a82cc84"} Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.007520 4913 scope.go:117] "RemoveContainer" containerID="ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0" Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.012244 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jqn8q" event={"ID":"211a0853-fb6a-4002-98be-aa01c99eaa7d","Type":"ContainerStarted","Data":"08c9124a25a046b9932dc92016fc3cd4ef993ec44a4675b0ed9121feef7e0118"} Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.012298 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jqn8q" event={"ID":"211a0853-fb6a-4002-98be-aa01c99eaa7d","Type":"ContainerStarted","Data":"db0a46a8fe44c42a9419cf6ef2e55a5438175fdc2e025cff0d77a8ceb555851b"} Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.034777 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-jqn8q" podStartSLOduration=1.554799515 podStartE2EDuration="2.034761856s" podCreationTimestamp="2026-01-21 06:47:46 +0000 UTC" firstStartedPulling="2026-01-21 06:47:47.207921456 +0000 UTC m=+757.004281129" lastFinishedPulling="2026-01-21 06:47:47.687883787 +0000 
UTC m=+757.484243470" observedRunningTime="2026-01-21 06:47:48.033415669 +0000 UTC m=+757.829775362" watchObservedRunningTime="2026-01-21 06:47:48.034761856 +0000 UTC m=+757.831121529" Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.050029 4913 scope.go:117] "RemoveContainer" containerID="ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0" Jan 21 06:47:48 crc kubenswrapper[4913]: E0121 06:47:48.050721 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0\": container with ID starting with ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0 not found: ID does not exist" containerID="ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0" Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.050825 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0"} err="failed to get container status \"ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0\": rpc error: code = NotFound desc = could not find container \"ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0\": container with ID starting with ee70f5e99f6fe50bcdecf71b5608834d813a8de2331cc586f0ccc65f973269a0 not found: ID does not exist" Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.065048 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-hlfnm"] Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.069383 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-hlfnm"] Jan 21 06:47:48 crc kubenswrapper[4913]: I0121 06:47:48.537493 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf14c82c-256f-4096-beb2-4e8be30564aa" 
path="/var/lib/kubelet/pods/bf14c82c-256f-4096-beb2-4e8be30564aa/volumes" Jan 21 06:47:56 crc kubenswrapper[4913]: I0121 06:47:56.960117 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:56 crc kubenswrapper[4913]: I0121 06:47:56.963330 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:56 crc kubenswrapper[4913]: I0121 06:47:56.995008 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.104107 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.464403 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb"] Jan 21 06:47:57 crc kubenswrapper[4913]: E0121 06:47:57.464650 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf14c82c-256f-4096-beb2-4e8be30564aa" containerName="registry-server" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.464672 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf14c82c-256f-4096-beb2-4e8be30564aa" containerName="registry-server" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.464798 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf14c82c-256f-4096-beb2-4e8be30564aa" containerName="registry-server" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.465522 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.467281 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f64wp" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.477572 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb"] Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.576470 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndzxz\" (UniqueName: \"kubernetes.io/projected/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-kube-api-access-ndzxz\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.576653 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-util\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.576699 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-bundle\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 
06:47:57.678447 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-util\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.678519 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-bundle\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.678694 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndzxz\" (UniqueName: \"kubernetes.io/projected/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-kube-api-access-ndzxz\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.679900 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-bundle\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.679967 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-util\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.712955 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndzxz\" (UniqueName: \"kubernetes.io/projected/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-kube-api-access-ndzxz\") pod \"a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:57 crc kubenswrapper[4913]: I0121 06:47:57.794384 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:47:58 crc kubenswrapper[4913]: I0121 06:47:58.229891 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb"] Jan 21 06:47:59 crc kubenswrapper[4913]: I0121 06:47:59.091116 4913 generic.go:334] "Generic (PLEG): container finished" podID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerID="d9191512905a50023a8bd3340913a6390b0e97c743493bde552499fe3bccd78f" exitCode=0 Jan 21 06:47:59 crc kubenswrapper[4913]: I0121 06:47:59.091214 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" event={"ID":"980a7b2a-b9d1-4935-ac4c-9ac4a4730138","Type":"ContainerDied","Data":"d9191512905a50023a8bd3340913a6390b0e97c743493bde552499fe3bccd78f"} Jan 21 06:47:59 crc kubenswrapper[4913]: I0121 06:47:59.091487 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" event={"ID":"980a7b2a-b9d1-4935-ac4c-9ac4a4730138","Type":"ContainerStarted","Data":"14c551895737557e89dec36d73483641542cf3a808c34f5b8d47e21ee1bbb538"} Jan 21 06:48:00 crc kubenswrapper[4913]: I0121 06:48:00.103299 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" event={"ID":"980a7b2a-b9d1-4935-ac4c-9ac4a4730138","Type":"ContainerStarted","Data":"78cb1589a905d5dddd50883207b015f6195746217f5f18c55b7dfc51421182fe"} Jan 21 06:48:01 crc kubenswrapper[4913]: I0121 06:48:01.112760 4913 generic.go:334] "Generic (PLEG): container finished" podID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerID="78cb1589a905d5dddd50883207b015f6195746217f5f18c55b7dfc51421182fe" exitCode=0 Jan 21 06:48:01 crc kubenswrapper[4913]: I0121 06:48:01.112826 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" event={"ID":"980a7b2a-b9d1-4935-ac4c-9ac4a4730138","Type":"ContainerDied","Data":"78cb1589a905d5dddd50883207b015f6195746217f5f18c55b7dfc51421182fe"} Jan 21 06:48:02 crc kubenswrapper[4913]: I0121 06:48:02.126201 4913 generic.go:334] "Generic (PLEG): container finished" podID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerID="e2875581fbd572dea4f4e410e08bce794cd12bf464303d41fbc9d66b0d7fcef6" exitCode=0 Jan 21 06:48:02 crc kubenswrapper[4913]: I0121 06:48:02.126274 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" event={"ID":"980a7b2a-b9d1-4935-ac4c-9ac4a4730138","Type":"ContainerDied","Data":"e2875581fbd572dea4f4e410e08bce794cd12bf464303d41fbc9d66b0d7fcef6"} Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.391666 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.466632 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndzxz\" (UniqueName: \"kubernetes.io/projected/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-kube-api-access-ndzxz\") pod \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.472366 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-kube-api-access-ndzxz" (OuterVolumeSpecName: "kube-api-access-ndzxz") pod "980a7b2a-b9d1-4935-ac4c-9ac4a4730138" (UID: "980a7b2a-b9d1-4935-ac4c-9ac4a4730138"). InnerVolumeSpecName "kube-api-access-ndzxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.567352 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-util\") pod \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.567447 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-bundle\") pod \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\" (UID: \"980a7b2a-b9d1-4935-ac4c-9ac4a4730138\") " Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.567778 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndzxz\" (UniqueName: \"kubernetes.io/projected/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-kube-api-access-ndzxz\") on node \"crc\" DevicePath \"\"" Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.568744 4913 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-bundle" (OuterVolumeSpecName: "bundle") pod "980a7b2a-b9d1-4935-ac4c-9ac4a4730138" (UID: "980a7b2a-b9d1-4935-ac4c-9ac4a4730138"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.597260 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-util" (OuterVolumeSpecName: "util") pod "980a7b2a-b9d1-4935-ac4c-9ac4a4730138" (UID: "980a7b2a-b9d1-4935-ac4c-9ac4a4730138"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.668632 4913 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:48:03 crc kubenswrapper[4913]: I0121 06:48:03.668980 4913 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/980a7b2a-b9d1-4935-ac4c-9ac4a4730138-util\") on node \"crc\" DevicePath \"\"" Jan 21 06:48:04 crc kubenswrapper[4913]: I0121 06:48:04.141263 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" event={"ID":"980a7b2a-b9d1-4935-ac4c-9ac4a4730138","Type":"ContainerDied","Data":"14c551895737557e89dec36d73483641542cf3a808c34f5b8d47e21ee1bbb538"} Jan 21 06:48:04 crc kubenswrapper[4913]: I0121 06:48:04.141329 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14c551895737557e89dec36d73483641542cf3a808c34f5b8d47e21ee1bbb538" Jan 21 06:48:04 crc kubenswrapper[4913]: I0121 06:48:04.141356 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb" Jan 21 06:48:08 crc kubenswrapper[4913]: I0121 06:48:08.319735 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:48:08 crc kubenswrapper[4913]: I0121 06:48:08.320520 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:48:08 crc kubenswrapper[4913]: I0121 06:48:08.320628 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:48:08 crc kubenswrapper[4913]: I0121 06:48:08.321688 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9d9dff08306ba7a1ea84489cddd781c030c4851b9fa6eef7b12bb8a6ced5e1f3"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 06:48:08 crc kubenswrapper[4913]: I0121 06:48:08.321826 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://9d9dff08306ba7a1ea84489cddd781c030c4851b9fa6eef7b12bb8a6ced5e1f3" gracePeriod=600 Jan 21 06:48:09 crc kubenswrapper[4913]: I0121 06:48:09.179203 4913 generic.go:334] "Generic (PLEG): 
container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="9d9dff08306ba7a1ea84489cddd781c030c4851b9fa6eef7b12bb8a6ced5e1f3" exitCode=0 Jan 21 06:48:09 crc kubenswrapper[4913]: I0121 06:48:09.179235 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"9d9dff08306ba7a1ea84489cddd781c030c4851b9fa6eef7b12bb8a6ced5e1f3"} Jan 21 06:48:09 crc kubenswrapper[4913]: I0121 06:48:09.179936 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"e0c9de231b7b7faa5ada83afcec7341a724626bb49b10ceffe9951e2dc769908"} Jan 21 06:48:09 crc kubenswrapper[4913]: I0121 06:48:09.179972 4913 scope.go:117] "RemoveContainer" containerID="0a98163e6aada7ee4b9fa7fd801afc6659904461e6fd2babc62a5d38c872a832" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.273959 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"] Jan 21 06:48:11 crc kubenswrapper[4913]: E0121 06:48:11.274571 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="pull" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.274610 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="pull" Jan 21 06:48:11 crc kubenswrapper[4913]: E0121 06:48:11.274637 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="extract" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.274644 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="extract" Jan 21 06:48:11 crc kubenswrapper[4913]: 
E0121 06:48:11.274662 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="util" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.274669 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="util" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.274780 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" containerName="extract" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.275177 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.276542 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-klgvv" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.276792 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.281236 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.306109 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"] Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.473668 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nr28\" (UniqueName: \"kubernetes.io/projected/463ce3c4-98b5-41f1-bf36-f271228094e5-kube-api-access-9nr28\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " 
pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.473735 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-apiservice-cert\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.473810 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-webhook-cert\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.574935 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-webhook-cert\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.575194 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nr28\" (UniqueName: \"kubernetes.io/projected/463ce3c4-98b5-41f1-bf36-f271228094e5-kube-api-access-9nr28\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.575261 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-apiservice-cert\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.581320 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-webhook-cert\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.581480 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-apiservice-cert\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.594135 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nr28\" (UniqueName: \"kubernetes.io/projected/463ce3c4-98b5-41f1-bf36-f271228094e5-kube-api-access-9nr28\") pod \"mariadb-operator-controller-manager-844d49f546-dqkph\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:11 crc kubenswrapper[4913]: I0121 06:48:11.894406 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:12 crc kubenswrapper[4913]: I0121 06:48:12.373361 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"] Jan 21 06:48:12 crc kubenswrapper[4913]: W0121 06:48:12.378378 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod463ce3c4_98b5_41f1_bf36_f271228094e5.slice/crio-cdb4e0359c30333e0f60a3a2aacd5b8acc1752dfe1e6b054f476c25f2495d9b4 WatchSource:0}: Error finding container cdb4e0359c30333e0f60a3a2aacd5b8acc1752dfe1e6b054f476c25f2495d9b4: Status 404 returned error can't find the container with id cdb4e0359c30333e0f60a3a2aacd5b8acc1752dfe1e6b054f476c25f2495d9b4 Jan 21 06:48:13 crc kubenswrapper[4913]: I0121 06:48:13.213024 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" event={"ID":"463ce3c4-98b5-41f1-bf36-f271228094e5","Type":"ContainerStarted","Data":"cdb4e0359c30333e0f60a3a2aacd5b8acc1752dfe1e6b054f476c25f2495d9b4"} Jan 21 06:48:16 crc kubenswrapper[4913]: I0121 06:48:16.228112 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" event={"ID":"463ce3c4-98b5-41f1-bf36-f271228094e5","Type":"ContainerStarted","Data":"c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63"} Jan 21 06:48:16 crc kubenswrapper[4913]: I0121 06:48:16.228777 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:16 crc kubenswrapper[4913]: I0121 06:48:16.253017 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" 
podStartSLOduration=1.8517759059999999 podStartE2EDuration="5.253000129s" podCreationTimestamp="2026-01-21 06:48:11 +0000 UTC" firstStartedPulling="2026-01-21 06:48:12.381551477 +0000 UTC m=+782.177911150" lastFinishedPulling="2026-01-21 06:48:15.7827757 +0000 UTC m=+785.579135373" observedRunningTime="2026-01-21 06:48:16.249807562 +0000 UTC m=+786.046167245" watchObservedRunningTime="2026-01-21 06:48:16.253000129 +0000 UTC m=+786.049359802" Jan 21 06:48:21 crc kubenswrapper[4913]: I0121 06:48:21.898466 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.455734 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-9rr22"] Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.456997 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.459697 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-mqdgg" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.465826 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-9rr22"] Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.570853 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvxq\" (UniqueName: \"kubernetes.io/projected/c40a34d4-0ef1-4aff-bc37-87c27e191d1f-kube-api-access-dwvxq\") pod \"infra-operator-index-9rr22\" (UID: \"c40a34d4-0ef1-4aff-bc37-87c27e191d1f\") " pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.672242 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwvxq\" (UniqueName: 
\"kubernetes.io/projected/c40a34d4-0ef1-4aff-bc37-87c27e191d1f-kube-api-access-dwvxq\") pod \"infra-operator-index-9rr22\" (UID: \"c40a34d4-0ef1-4aff-bc37-87c27e191d1f\") " pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.704425 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwvxq\" (UniqueName: \"kubernetes.io/projected/c40a34d4-0ef1-4aff-bc37-87c27e191d1f-kube-api-access-dwvxq\") pod \"infra-operator-index-9rr22\" (UID: \"c40a34d4-0ef1-4aff-bc37-87c27e191d1f\") " pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.772624 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:26 crc kubenswrapper[4913]: I0121 06:48:26.997425 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-9rr22"] Jan 21 06:48:27 crc kubenswrapper[4913]: W0121 06:48:27.016294 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc40a34d4_0ef1_4aff_bc37_87c27e191d1f.slice/crio-cbc301953611f343a8ff50e4c3bf31b93f4bad200d2da6f98a2d22ff700514b5 WatchSource:0}: Error finding container cbc301953611f343a8ff50e4c3bf31b93f4bad200d2da6f98a2d22ff700514b5: Status 404 returned error can't find the container with id cbc301953611f343a8ff50e4c3bf31b93f4bad200d2da6f98a2d22ff700514b5 Jan 21 06:48:27 crc kubenswrapper[4913]: I0121 06:48:27.300575 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9rr22" event={"ID":"c40a34d4-0ef1-4aff-bc37-87c27e191d1f","Type":"ContainerStarted","Data":"cbc301953611f343a8ff50e4c3bf31b93f4bad200d2da6f98a2d22ff700514b5"} Jan 21 06:48:28 crc kubenswrapper[4913]: I0121 06:48:28.310742 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-index-9rr22" event={"ID":"c40a34d4-0ef1-4aff-bc37-87c27e191d1f","Type":"ContainerStarted","Data":"2e10db8fdda3ab4fed4aafd0a7ed0a0ac45d6a1e90610404aa5d6acc88641f0f"} Jan 21 06:48:28 crc kubenswrapper[4913]: I0121 06:48:28.327692 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-9rr22" podStartSLOduration=1.4566955400000001 podStartE2EDuration="2.327661935s" podCreationTimestamp="2026-01-21 06:48:26 +0000 UTC" firstStartedPulling="2026-01-21 06:48:27.01820668 +0000 UTC m=+796.814566343" lastFinishedPulling="2026-01-21 06:48:27.889173065 +0000 UTC m=+797.685532738" observedRunningTime="2026-01-21 06:48:28.327555362 +0000 UTC m=+798.123915045" watchObservedRunningTime="2026-01-21 06:48:28.327661935 +0000 UTC m=+798.124021688" Jan 21 06:48:36 crc kubenswrapper[4913]: I0121 06:48:36.774378 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:36 crc kubenswrapper[4913]: I0121 06:48:36.775015 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:36 crc kubenswrapper[4913]: I0121 06:48:36.810649 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:37 crc kubenswrapper[4913]: I0121 06:48:37.402570 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.702653 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6"] Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.703832 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.716785 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6"] Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.716793 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f64wp" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.840152 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-util\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.840252 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wg8f\" (UniqueName: \"kubernetes.io/projected/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-kube-api-access-4wg8f\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.840313 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-bundle\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 
06:48:38.941742 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-util\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.941851 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wg8f\" (UniqueName: \"kubernetes.io/projected/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-kube-api-access-4wg8f\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.941913 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-bundle\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.942258 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-util\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.942778 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-bundle\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:38 crc kubenswrapper[4913]: I0121 06:48:38.970673 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wg8f\" (UniqueName: \"kubernetes.io/projected/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-kube-api-access-4wg8f\") pod \"ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:39 crc kubenswrapper[4913]: I0121 06:48:39.029188 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:39 crc kubenswrapper[4913]: I0121 06:48:39.271498 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6"] Jan 21 06:48:39 crc kubenswrapper[4913]: W0121 06:48:39.280195 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a93fdf_fffb_4344_8ac8_81d8be41eea7.slice/crio-7020ed21050712dd68b64eafbf541b2c68c8e84ecd78b8969c197ec97365dda5 WatchSource:0}: Error finding container 7020ed21050712dd68b64eafbf541b2c68c8e84ecd78b8969c197ec97365dda5: Status 404 returned error can't find the container with id 7020ed21050712dd68b64eafbf541b2c68c8e84ecd78b8969c197ec97365dda5 Jan 21 06:48:39 crc kubenswrapper[4913]: I0121 06:48:39.383011 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" 
event={"ID":"f9a93fdf-fffb-4344-8ac8-81d8be41eea7","Type":"ContainerStarted","Data":"7020ed21050712dd68b64eafbf541b2c68c8e84ecd78b8969c197ec97365dda5"} Jan 21 06:48:40 crc kubenswrapper[4913]: I0121 06:48:40.388352 4913 generic.go:334] "Generic (PLEG): container finished" podID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerID="660368d7d30a6dcd15b89683468c16579bae9e6ba5e62cde1ef85f9aba8de9d8" exitCode=0 Jan 21 06:48:40 crc kubenswrapper[4913]: I0121 06:48:40.388393 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" event={"ID":"f9a93fdf-fffb-4344-8ac8-81d8be41eea7","Type":"ContainerDied","Data":"660368d7d30a6dcd15b89683468c16579bae9e6ba5e62cde1ef85f9aba8de9d8"} Jan 21 06:48:41 crc kubenswrapper[4913]: E0121 06:48:41.630215 4913 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a93fdf_fffb_4344_8ac8_81d8be41eea7.slice/crio-conmon-e07664685c33ef4c385016551f2a546716097259da2d998a774abfcc7e395a11.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9a93fdf_fffb_4344_8ac8_81d8be41eea7.slice/crio-e07664685c33ef4c385016551f2a546716097259da2d998a774abfcc7e395a11.scope\": RecentStats: unable to find data in memory cache]" Jan 21 06:48:42 crc kubenswrapper[4913]: I0121 06:48:42.404006 4913 generic.go:334] "Generic (PLEG): container finished" podID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerID="e07664685c33ef4c385016551f2a546716097259da2d998a774abfcc7e395a11" exitCode=0 Jan 21 06:48:42 crc kubenswrapper[4913]: I0121 06:48:42.404354 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" 
event={"ID":"f9a93fdf-fffb-4344-8ac8-81d8be41eea7","Type":"ContainerDied","Data":"e07664685c33ef4c385016551f2a546716097259da2d998a774abfcc7e395a11"} Jan 21 06:48:43 crc kubenswrapper[4913]: I0121 06:48:43.413851 4913 generic.go:334] "Generic (PLEG): container finished" podID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerID="7ea6d30dbbc206edb2f162346b01f1b70cea4ff52c09855b5688ceae555cd86f" exitCode=0 Jan 21 06:48:43 crc kubenswrapper[4913]: I0121 06:48:43.413917 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" event={"ID":"f9a93fdf-fffb-4344-8ac8-81d8be41eea7","Type":"ContainerDied","Data":"7ea6d30dbbc206edb2f162346b01f1b70cea4ff52c09855b5688ceae555cd86f"} Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.707284 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.819880 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wg8f\" (UniqueName: \"kubernetes.io/projected/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-kube-api-access-4wg8f\") pod \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.820023 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-util\") pod \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\" (UID: \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.820095 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-bundle\") pod \"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\" (UID: 
\"f9a93fdf-fffb-4344-8ac8-81d8be41eea7\") " Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.823053 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-bundle" (OuterVolumeSpecName: "bundle") pod "f9a93fdf-fffb-4344-8ac8-81d8be41eea7" (UID: "f9a93fdf-fffb-4344-8ac8-81d8be41eea7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.829925 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-util" (OuterVolumeSpecName: "util") pod "f9a93fdf-fffb-4344-8ac8-81d8be41eea7" (UID: "f9a93fdf-fffb-4344-8ac8-81d8be41eea7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.831052 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-kube-api-access-4wg8f" (OuterVolumeSpecName: "kube-api-access-4wg8f") pod "f9a93fdf-fffb-4344-8ac8-81d8be41eea7" (UID: "f9a93fdf-fffb-4344-8ac8-81d8be41eea7"). InnerVolumeSpecName "kube-api-access-4wg8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.921308 4913 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-util\") on node \"crc\" DevicePath \"\"" Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.921362 4913 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:48:44 crc kubenswrapper[4913]: I0121 06:48:44.921383 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wg8f\" (UniqueName: \"kubernetes.io/projected/f9a93fdf-fffb-4344-8ac8-81d8be41eea7-kube-api-access-4wg8f\") on node \"crc\" DevicePath \"\"" Jan 21 06:48:45 crc kubenswrapper[4913]: I0121 06:48:45.434330 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" event={"ID":"f9a93fdf-fffb-4344-8ac8-81d8be41eea7","Type":"ContainerDied","Data":"7020ed21050712dd68b64eafbf541b2c68c8e84ecd78b8969c197ec97365dda5"} Jan 21 06:48:45 crc kubenswrapper[4913]: I0121 06:48:45.434729 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7020ed21050712dd68b64eafbf541b2c68c8e84ecd78b8969c197ec97365dda5" Jan 21 06:48:45 crc kubenswrapper[4913]: I0121 06:48:45.434426 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.194998 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"] Jan 21 06:48:54 crc kubenswrapper[4913]: E0121 06:48:54.195638 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="extract" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.195653 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="extract" Jan 21 06:48:54 crc kubenswrapper[4913]: E0121 06:48:54.195665 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="util" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.195671 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="util" Jan 21 06:48:54 crc kubenswrapper[4913]: E0121 06:48:54.195683 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="pull" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.195691 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="pull" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.195819 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" containerName="extract" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.196246 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.198236 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4h7gf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.198496 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.213931 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"] Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.243615 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2srg\" (UniqueName: \"kubernetes.io/projected/0400ab56-f59a-4483-83d7-56db6e482138-kube-api-access-c2srg\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.243658 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-apiservice-cert\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.243931 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-webhook-cert\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: 
\"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.344750 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-webhook-cert\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.344847 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2srg\" (UniqueName: \"kubernetes.io/projected/0400ab56-f59a-4483-83d7-56db6e482138-kube-api-access-c2srg\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.344871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-apiservice-cert\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.350997 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-apiservice-cert\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.360136 4913 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-webhook-cert\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.361100 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2srg\" (UniqueName: \"kubernetes.io/projected/0400ab56-f59a-4483-83d7-56db6e482138-kube-api-access-c2srg\") pod \"infra-operator-controller-manager-54c44d6596-mtzxf\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.513903 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:54 crc kubenswrapper[4913]: I0121 06:48:54.748225 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"] Jan 21 06:48:55 crc kubenswrapper[4913]: I0121 06:48:55.493199 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" event={"ID":"0400ab56-f59a-4483-83d7-56db6e482138","Type":"ContainerStarted","Data":"34dad13f69ec8ebac45464947f13649925c0f206b2f50748da475e0fdda03067"} Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.503529 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" event={"ID":"0400ab56-f59a-4483-83d7-56db6e482138","Type":"ContainerStarted","Data":"8bc41a6fd10a55de8137ca15ea9ded0860bb2e67069e5ae93f7bf376b260b29f"} Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.503904 4913 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.526031 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" podStartSLOduration=1.2649362960000001 podStartE2EDuration="3.526015846s" podCreationTimestamp="2026-01-21 06:48:54 +0000 UTC" firstStartedPulling="2026-01-21 06:48:54.759580234 +0000 UTC m=+824.555939917" lastFinishedPulling="2026-01-21 06:48:57.020659794 +0000 UTC m=+826.817019467" observedRunningTime="2026-01-21 06:48:57.522812598 +0000 UTC m=+827.319172271" watchObservedRunningTime="2026-01-21 06:48:57.526015846 +0000 UTC m=+827.322375519" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.726337 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.727427 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.729684 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"openshift-service-ca.crt" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.730398 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"galera-openstack-dockercfg-6gtwj" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.732544 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"kube-root-ca.crt" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.732582 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"openstack-scripts" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.732561 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"openstack-config-data" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.736252 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.737222 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.745383 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.746535 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.747013 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.756998 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.760628 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892226 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892292 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-generated\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892333 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892367 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-1\" (UID: 
\"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892400 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs7jc\" (UniqueName: \"kubernetes.io/projected/edaae817-2cda-4274-bad0-53165cffa224-kube-api-access-hs7jc\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892440 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-operator-scripts\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892471 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-operator-scripts\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892500 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kolla-config\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892533 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-kolla-config\") pod \"openstack-galera-1\" (UID: 
\"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892644 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fzmd\" (UniqueName: \"kubernetes.io/projected/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kube-api-access-5fzmd\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892690 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-config-data-default\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892723 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-default\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892769 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892807 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/edaae817-2cda-4274-bad0-53165cffa224-config-data-generated\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892852 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xh9p\" (UniqueName: \"kubernetes.io/projected/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kube-api-access-2xh9p\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892881 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-default\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.892900 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kolla-config\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.893009 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994268 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-default\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994324 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994344 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/edaae817-2cda-4274-bad0-53165cffa224-config-data-generated\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994367 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xh9p\" (UniqueName: \"kubernetes.io/projected/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kube-api-access-2xh9p\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994384 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-default\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994398 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kolla-config\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994429 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994471 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994493 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-generated\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994517 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994542 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") 
" pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994565 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs7jc\" (UniqueName: \"kubernetes.io/projected/edaae817-2cda-4274-bad0-53165cffa224-kube-api-access-hs7jc\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994628 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-operator-scripts\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994650 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-operator-scripts\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994672 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kolla-config\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994692 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-kolla-config\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 
06:48:57.994720 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fzmd\" (UniqueName: \"kubernetes.io/projected/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kube-api-access-5fzmd\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.994741 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-config-data-default\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.995012 4913 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") device mount path \"/mnt/openstack/pv10\"" pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.995496 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.995012 4913 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") device mount path \"/mnt/openstack/pv02\"" pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.995538 4913 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-default\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.995659 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kolla-config\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.995965 4913 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") device mount path \"/mnt/openstack/pv05\"" pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.996003 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-default\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.996443 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-generated\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.996510 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-kolla-config\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.996550 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kolla-config\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.997497 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.997868 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-operator-scripts\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.998001 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/edaae817-2cda-4274-bad0-53165cffa224-config-data-generated\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:57 crc kubenswrapper[4913]: I0121 06:48:57.999150 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-operator-scripts\") pod \"openstack-galera-1\" (UID: 
\"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:57.996098 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-config-data-default\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.016335 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.017840 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.022077 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs7jc\" (UniqueName: \"kubernetes.io/projected/edaae817-2cda-4274-bad0-53165cffa224-kube-api-access-hs7jc\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.035074 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xh9p\" (UniqueName: \"kubernetes.io/projected/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kube-api-access-2xh9p\") pod \"openstack-galera-2\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.038948 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fzmd\" (UniqueName: \"kubernetes.io/projected/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kube-api-access-5fzmd\") pod \"openstack-galera-0\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.044612 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-1\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.071829 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.093909 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.104802 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.535969 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.544530 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 21 06:48:58 crc kubenswrapper[4913]: I0121 06:48:58.550091 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 21 06:48:58 crc kubenswrapper[4913]: W0121 06:48:58.557493 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddedf2cc4_5f64_40c5_83da_cf1e0cfebf6c.slice/crio-275c8c95693d8bda2924cf30883ec6b9a72c6ebcd833f17576c0c6f8b32bd1ac WatchSource:0}: Error finding container 275c8c95693d8bda2924cf30883ec6b9a72c6ebcd833f17576c0c6f8b32bd1ac: Status 404 returned error can't find the container with id 275c8c95693d8bda2924cf30883ec6b9a72c6ebcd833f17576c0c6f8b32bd1ac Jan 21 06:48:58 crc kubenswrapper[4913]: W0121 06:48:58.560774 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedaae817_2cda_4274_bad0_53165cffa224.slice/crio-9a8d3931bac549a2afa2ac686d043eed141a57ba24a7826d3e2088016b7fc44d WatchSource:0}: Error finding container 9a8d3931bac549a2afa2ac686d043eed141a57ba24a7826d3e2088016b7fc44d: Status 404 returned error can't find the container with id 9a8d3931bac549a2afa2ac686d043eed141a57ba24a7826d3e2088016b7fc44d Jan 21 06:48:59 crc kubenswrapper[4913]: I0121 06:48:59.515707 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"06b1fd9b-951d-4d8e-8a08-4a2e8d820370","Type":"ContainerStarted","Data":"5f5d4f1ef26e68f7b2d31a9b3d84d0da1ff312a47ab5657edc54afc49f04f096"} Jan 21 06:48:59 crc kubenswrapper[4913]: 
I0121 06:48:59.516888 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"edaae817-2cda-4274-bad0-53165cffa224","Type":"ContainerStarted","Data":"9a8d3931bac549a2afa2ac686d043eed141a57ba24a7826d3e2088016b7fc44d"}
Jan 21 06:48:59 crc kubenswrapper[4913]: I0121 06:48:59.521182 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c","Type":"ContainerStarted","Data":"275c8c95693d8bda2924cf30883ec6b9a72c6ebcd833f17576c0c6f8b32bd1ac"}
Jan 21 06:49:04 crc kubenswrapper[4913]: I0121 06:49:04.517848 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.269051 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/memcached-0"]
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.269733 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.272149 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"memcached-config-data"
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.272890 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"memcached-memcached-dockercfg-g4l2t"
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.287128 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/memcached-0"]
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.448345 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-config-data\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.448759 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-kolla-config\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.448870 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx747\" (UniqueName: \"kubernetes.io/projected/ac820b36-83fb-44ca-97b0-6181846a5ef3-kube-api-access-tx747\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.555212 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx747\" (UniqueName: \"kubernetes.io/projected/ac820b36-83fb-44ca-97b0-6181846a5ef3-kube-api-access-tx747\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.555316 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-config-data\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.555334 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-kolla-config\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.556018 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-kolla-config\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.557872 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-config-data\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.567462 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c","Type":"ContainerStarted","Data":"11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8"}
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.601629 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx747\" (UniqueName: \"kubernetes.io/projected/ac820b36-83fb-44ca-97b0-6181846a5ef3-kube-api-access-tx747\") pod \"memcached-0\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:06 crc kubenswrapper[4913]: I0121 06:49:06.883243 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:07 crc kubenswrapper[4913]: I0121 06:49:07.065449 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/memcached-0"]
Jan 21 06:49:07 crc kubenswrapper[4913]: W0121 06:49:07.070665 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac820b36_83fb_44ca_97b0_6181846a5ef3.slice/crio-7628ff84dc7d5ccc58494fe2504418c556a22a9bf641ef4e437400ce6be05e92 WatchSource:0}: Error finding container 7628ff84dc7d5ccc58494fe2504418c556a22a9bf641ef4e437400ce6be05e92: Status 404 returned error can't find the container with id 7628ff84dc7d5ccc58494fe2504418c556a22a9bf641ef4e437400ce6be05e92
Jan 21 06:49:07 crc kubenswrapper[4913]: I0121 06:49:07.573634 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"ac820b36-83fb-44ca-97b0-6181846a5ef3","Type":"ContainerStarted","Data":"7628ff84dc7d5ccc58494fe2504418c556a22a9bf641ef4e437400ce6be05e92"}
Jan 21 06:49:07 crc kubenswrapper[4913]: I0121 06:49:07.576474 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"06b1fd9b-951d-4d8e-8a08-4a2e8d820370","Type":"ContainerStarted","Data":"00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49"}
Jan 21 06:49:07 crc kubenswrapper[4913]: I0121 06:49:07.578278 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"edaae817-2cda-4274-bad0-53165cffa224","Type":"ContainerStarted","Data":"b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf"}
Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.060404 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5dtkj"]
Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.061775 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj"
Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.064245 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-sdlkw"
Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.079656 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5dtkj"]
Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.189422 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltk68\" (UniqueName: \"kubernetes.io/projected/d5725475-8b61-45a7-91e8-1d28e9042910-kube-api-access-ltk68\") pod \"rabbitmq-cluster-operator-index-5dtkj\" (UID: \"d5725475-8b61-45a7-91e8-1d28e9042910\") " pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj"
Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.290783 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltk68\" (UniqueName: \"kubernetes.io/projected/d5725475-8b61-45a7-91e8-1d28e9042910-kube-api-access-ltk68\") pod \"rabbitmq-cluster-operator-index-5dtkj\" (UID: \"d5725475-8b61-45a7-91e8-1d28e9042910\") " pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj"
Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.320691 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltk68\" (UniqueName: \"kubernetes.io/projected/d5725475-8b61-45a7-91e8-1d28e9042910-kube-api-access-ltk68\") pod \"rabbitmq-cluster-operator-index-5dtkj\" (UID: \"d5725475-8b61-45a7-91e8-1d28e9042910\") " pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj"
Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.379263 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj"
Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.591624 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"ac820b36-83fb-44ca-97b0-6181846a5ef3","Type":"ContainerStarted","Data":"9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3"}
Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.592268 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.787138 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/memcached-0" podStartSLOduration=1.932974258 podStartE2EDuration="3.787106032s" podCreationTimestamp="2026-01-21 06:49:06 +0000 UTC" firstStartedPulling="2026-01-21 06:49:07.072934562 +0000 UTC m=+836.869294235" lastFinishedPulling="2026-01-21 06:49:08.927066336 +0000 UTC m=+838.723426009" observedRunningTime="2026-01-21 06:49:09.612358022 +0000 UTC m=+839.408717705" watchObservedRunningTime="2026-01-21 06:49:09.787106032 +0000 UTC m=+839.583465735"
Jan 21 06:49:09 crc kubenswrapper[4913]: I0121 06:49:09.788365 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5dtkj"]
Jan 21 06:49:10 crc kubenswrapper[4913]: I0121 06:49:10.597446 4913 generic.go:334] "Generic (PLEG): container finished" podID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerID="00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49" exitCode=0
Jan 21 06:49:10 crc kubenswrapper[4913]: I0121 06:49:10.597518 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"06b1fd9b-951d-4d8e-8a08-4a2e8d820370","Type":"ContainerDied","Data":"00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49"}
Jan 21 06:49:10 crc kubenswrapper[4913]: I0121 06:49:10.600730 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" event={"ID":"d5725475-8b61-45a7-91e8-1d28e9042910","Type":"ContainerStarted","Data":"f8fbd38ff1590a71df6d9f315408484aabe627daa085b88b69fdaab05a28c092"}
Jan 21 06:49:10 crc kubenswrapper[4913]: I0121 06:49:10.609305 4913 generic.go:334] "Generic (PLEG): container finished" podID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerID="11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8" exitCode=0
Jan 21 06:49:10 crc kubenswrapper[4913]: I0121 06:49:10.609367 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c","Type":"ContainerDied","Data":"11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8"}
Jan 21 06:49:10 crc kubenswrapper[4913]: I0121 06:49:10.611715 4913 generic.go:334] "Generic (PLEG): container finished" podID="edaae817-2cda-4274-bad0-53165cffa224" containerID="b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf" exitCode=0
Jan 21 06:49:10 crc kubenswrapper[4913]: I0121 06:49:10.612311 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"edaae817-2cda-4274-bad0-53165cffa224","Type":"ContainerDied","Data":"b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf"}
Jan 21 06:49:11 crc kubenswrapper[4913]: I0121 06:49:11.618101 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c","Type":"ContainerStarted","Data":"08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2"}
Jan 21 06:49:11 crc kubenswrapper[4913]: I0121 06:49:11.619850 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"06b1fd9b-951d-4d8e-8a08-4a2e8d820370","Type":"ContainerStarted","Data":"b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa"}
Jan 21 06:49:11 crc kubenswrapper[4913]: I0121 06:49:11.621185 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"edaae817-2cda-4274-bad0-53165cffa224","Type":"ContainerStarted","Data":"fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94"}
Jan 21 06:49:11 crc kubenswrapper[4913]: I0121 06:49:11.645862 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/openstack-galera-0" podStartSLOduration=7.876087523 podStartE2EDuration="15.645843473s" podCreationTimestamp="2026-01-21 06:48:56 +0000 UTC" firstStartedPulling="2026-01-21 06:48:58.559436565 +0000 UTC m=+828.355796238" lastFinishedPulling="2026-01-21 06:49:06.329192515 +0000 UTC m=+836.125552188" observedRunningTime="2026-01-21 06:49:11.644207068 +0000 UTC m=+841.440566741" watchObservedRunningTime="2026-01-21 06:49:11.645843473 +0000 UTC m=+841.442203146"
Jan 21 06:49:11 crc kubenswrapper[4913]: I0121 06:49:11.667972 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/openstack-galera-1" podStartSLOduration=7.857434921 podStartE2EDuration="15.667953789s" podCreationTimestamp="2026-01-21 06:48:56 +0000 UTC" firstStartedPulling="2026-01-21 06:48:58.5625813 +0000 UTC m=+828.358941013" lastFinishedPulling="2026-01-21 06:49:06.373100198 +0000 UTC m=+836.169459881" observedRunningTime="2026-01-21 06:49:11.665863281 +0000 UTC m=+841.462222954" watchObservedRunningTime="2026-01-21 06:49:11.667953789 +0000 UTC m=+841.464313472"
Jan 21 06:49:11 crc kubenswrapper[4913]: I0121 06:49:11.701855 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/openstack-galera-2" podStartSLOduration=7.920944941 podStartE2EDuration="15.701836808s" podCreationTimestamp="2026-01-21 06:48:56 +0000 UTC" firstStartedPulling="2026-01-21 06:48:58.543919718 +0000 UTC m=+828.340279391" lastFinishedPulling="2026-01-21 06:49:06.324811585 +0000 UTC m=+836.121171258" observedRunningTime="2026-01-21 06:49:11.696500271 +0000 UTC m=+841.492859954" watchObservedRunningTime="2026-01-21 06:49:11.701836808 +0000 UTC m=+841.498196481"
Jan 21 06:49:15 crc kubenswrapper[4913]: I0121 06:49:15.652703 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" event={"ID":"d5725475-8b61-45a7-91e8-1d28e9042910","Type":"ContainerStarted","Data":"9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30"}
Jan 21 06:49:15 crc kubenswrapper[4913]: I0121 06:49:15.676364 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" podStartSLOduration=1.2871669350000001 podStartE2EDuration="6.676337994s" podCreationTimestamp="2026-01-21 06:49:09 +0000 UTC" firstStartedPulling="2026-01-21 06:49:09.792206161 +0000 UTC m=+839.588565854" lastFinishedPulling="2026-01-21 06:49:15.18137725 +0000 UTC m=+844.977736913" observedRunningTime="2026-01-21 06:49:15.673179828 +0000 UTC m=+845.469539541" watchObservedRunningTime="2026-01-21 06:49:15.676337994 +0000 UTC m=+845.472697707"
Jan 21 06:49:16 crc kubenswrapper[4913]: I0121 06:49:16.885489 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/memcached-0"
Jan 21 06:49:18 crc kubenswrapper[4913]: I0121 06:49:18.073242 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/openstack-galera-0"
Jan 21 06:49:18 crc kubenswrapper[4913]: I0121 06:49:18.073827 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/openstack-galera-0"
Jan 21 06:49:18 crc kubenswrapper[4913]: I0121 06:49:18.095137 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/openstack-galera-1"
Jan 21 06:49:18 crc kubenswrapper[4913]: I0121 06:49:18.095201 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/openstack-galera-1"
Jan 21 06:49:18 crc kubenswrapper[4913]: I0121 06:49:18.105851 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/openstack-galera-2"
Jan 21 06:49:18 crc kubenswrapper[4913]: I0121 06:49:18.105916 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/openstack-galera-2"
Jan 21 06:49:19 crc kubenswrapper[4913]: I0121 06:49:19.380059 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj"
Jan 21 06:49:19 crc kubenswrapper[4913]: I0121 06:49:19.380136 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj"
Jan 21 06:49:19 crc kubenswrapper[4913]: I0121 06:49:19.424467 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj"
Jan 21 06:49:20 crc kubenswrapper[4913]: I0121 06:49:20.445405 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/openstack-galera-2"
Jan 21 06:49:20 crc kubenswrapper[4913]: I0121 06:49:20.523500 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/openstack-galera-2"
Jan 21 06:49:26 crc kubenswrapper[4913]: I0121 06:49:26.808910 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/root-account-create-update-thtbk"]
Jan 21 06:49:26 crc kubenswrapper[4913]: I0121 06:49:26.810256 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:26 crc kubenswrapper[4913]: I0121 06:49:26.813802 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"openstack-mariadb-root-db-secret"
Jan 21 06:49:26 crc kubenswrapper[4913]: I0121 06:49:26.828435 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-thtbk"]
Jan 21 06:49:26 crc kubenswrapper[4913]: I0121 06:49:26.938444 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09733cef-ac9b-4a13-92a5-4b416079180f-operator-scripts\") pod \"root-account-create-update-thtbk\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") " pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:26 crc kubenswrapper[4913]: I0121 06:49:26.938530 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wh45\" (UniqueName: \"kubernetes.io/projected/09733cef-ac9b-4a13-92a5-4b416079180f-kube-api-access-6wh45\") pod \"root-account-create-update-thtbk\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") " pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.039748 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wh45\" (UniqueName: \"kubernetes.io/projected/09733cef-ac9b-4a13-92a5-4b416079180f-kube-api-access-6wh45\") pod \"root-account-create-update-thtbk\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") " pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.039857 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09733cef-ac9b-4a13-92a5-4b416079180f-operator-scripts\") pod \"root-account-create-update-thtbk\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") " pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.040710 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09733cef-ac9b-4a13-92a5-4b416079180f-operator-scripts\") pod \"root-account-create-update-thtbk\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") " pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.060876 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wh45\" (UniqueName: \"kubernetes.io/projected/09733cef-ac9b-4a13-92a5-4b416079180f-kube-api-access-6wh45\") pod \"root-account-create-update-thtbk\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") " pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.186312 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.401905 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-thtbk"]
Jan 21 06:49:27 crc kubenswrapper[4913]: I0121 06:49:27.725927 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-thtbk" event={"ID":"09733cef-ac9b-4a13-92a5-4b416079180f","Type":"ContainerStarted","Data":"e41ffcdab252a256602e7c621f585fee394f0f1b8f113871d9c3d7de9d58193b"}
Jan 21 06:49:28 crc kubenswrapper[4913]: I0121 06:49:28.163912 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/openstack-galera-2" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="galera" probeResult="failure" output=<
Jan 21 06:49:28 crc kubenswrapper[4913]: wsrep_local_state_comment (Donor/Desynced) differs from Synced
Jan 21 06:49:28 crc kubenswrapper[4913]: >
Jan 21 06:49:29 crc kubenswrapper[4913]: I0121 06:49:29.416016 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj"
Jan 21 06:49:32 crc kubenswrapper[4913]: I0121 06:49:32.775724 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-thtbk" event={"ID":"09733cef-ac9b-4a13-92a5-4b416079180f","Type":"ContainerStarted","Data":"ad4e9a5725df8589b88531ce1b43bf4f6ce13685f4a05de5d6349c46dcad6a4a"}
Jan 21 06:49:32 crc kubenswrapper[4913]: I0121 06:49:32.792265 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/root-account-create-update-thtbk" podStartSLOduration=6.7922460000000004 podStartE2EDuration="6.792246s" podCreationTimestamp="2026-01-21 06:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:49:32.789713191 +0000 UTC m=+862.586072884" watchObservedRunningTime="2026-01-21 06:49:32.792246 +0000 UTC m=+862.588605673"
Jan 21 06:49:34 crc kubenswrapper[4913]: I0121 06:49:34.792047 4913 generic.go:334] "Generic (PLEG): container finished" podID="09733cef-ac9b-4a13-92a5-4b416079180f" containerID="ad4e9a5725df8589b88531ce1b43bf4f6ce13685f4a05de5d6349c46dcad6a4a" exitCode=0
Jan 21 06:49:34 crc kubenswrapper[4913]: I0121 06:49:34.792137 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-thtbk" event={"ID":"09733cef-ac9b-4a13-92a5-4b416079180f","Type":"ContainerDied","Data":"ad4e9a5725df8589b88531ce1b43bf4f6ce13685f4a05de5d6349c46dcad6a4a"}
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.168746 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.294089 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09733cef-ac9b-4a13-92a5-4b416079180f-operator-scripts\") pod \"09733cef-ac9b-4a13-92a5-4b416079180f\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") "
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.294225 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wh45\" (UniqueName: \"kubernetes.io/projected/09733cef-ac9b-4a13-92a5-4b416079180f-kube-api-access-6wh45\") pod \"09733cef-ac9b-4a13-92a5-4b416079180f\" (UID: \"09733cef-ac9b-4a13-92a5-4b416079180f\") "
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.294753 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09733cef-ac9b-4a13-92a5-4b416079180f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09733cef-ac9b-4a13-92a5-4b416079180f" (UID: "09733cef-ac9b-4a13-92a5-4b416079180f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.300274 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09733cef-ac9b-4a13-92a5-4b416079180f-kube-api-access-6wh45" (OuterVolumeSpecName: "kube-api-access-6wh45") pod "09733cef-ac9b-4a13-92a5-4b416079180f" (UID: "09733cef-ac9b-4a13-92a5-4b416079180f"). InnerVolumeSpecName "kube-api-access-6wh45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.338840 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/openstack-galera-1"
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.395573 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09733cef-ac9b-4a13-92a5-4b416079180f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.395625 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wh45\" (UniqueName: \"kubernetes.io/projected/09733cef-ac9b-4a13-92a5-4b416079180f-kube-api-access-6wh45\") on node \"crc\" DevicePath \"\""
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.423378 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/openstack-galera-1"
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.810280 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/root-account-create-update-thtbk" event={"ID":"09733cef-ac9b-4a13-92a5-4b416079180f","Type":"ContainerDied","Data":"e41ffcdab252a256602e7c621f585fee394f0f1b8f113871d9c3d7de9d58193b"}
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.810348 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e41ffcdab252a256602e7c621f585fee394f0f1b8f113871d9c3d7de9d58193b"
Jan 21 06:49:36 crc kubenswrapper[4913]: I0121 06:49:36.810298 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-thtbk"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.685488 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"]
Jan 21 06:49:40 crc kubenswrapper[4913]: E0121 06:49:40.686983 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09733cef-ac9b-4a13-92a5-4b416079180f" containerName="mariadb-account-create-update"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.687059 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="09733cef-ac9b-4a13-92a5-4b416079180f" containerName="mariadb-account-create-update"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.687228 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="09733cef-ac9b-4a13-92a5-4b416079180f" containerName="mariadb-account-create-update"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.688050 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.689817 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f64wp"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.700527 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"]
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.856891 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.857153 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbbdm\" (UniqueName: \"kubernetes.io/projected/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-kube-api-access-gbbdm\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.857305 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.958769 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbbdm\" (UniqueName: \"kubernetes.io/projected/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-kube-api-access-gbbdm\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.958871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.959000 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.959728 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.959819 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:40 crc kubenswrapper[4913]: I0121 06:49:40.991342 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbbdm\" (UniqueName: \"kubernetes.io/projected/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-kube-api-access-gbbdm\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:41 crc kubenswrapper[4913]: I0121 06:49:41.002477 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:41 crc kubenswrapper[4913]: I0121 06:49:41.479497 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"]
Jan 21 06:49:41 crc kubenswrapper[4913]: I0121 06:49:41.847415 4913 generic.go:334] "Generic (PLEG): container finished" podID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerID="436316e77fad673adee43600b81c8e8cb659f723e40fde5ac692b7f2f5e51c80" exitCode=0
Jan 21 06:49:41 crc kubenswrapper[4913]: I0121 06:49:41.847525 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch" event={"ID":"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff","Type":"ContainerDied","Data":"436316e77fad673adee43600b81c8e8cb659f723e40fde5ac692b7f2f5e51c80"}
Jan 21 06:49:41 crc kubenswrapper[4913]: I0121 06:49:41.847751 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch" event={"ID":"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff","Type":"ContainerStarted","Data":"c884ae2c47d957e330460d081e220f62d940e2a97f174bd338c9f15f97922f16"}
Jan 21 06:49:42 crc kubenswrapper[4913]: I0121 06:49:42.660922 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/openstack-galera-0"
Jan 21 06:49:42 crc kubenswrapper[4913]: I0121 06:49:42.755994 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/openstack-galera-0"
Jan 21 06:49:43 crc kubenswrapper[4913]: I0121 06:49:43.865407 4913 generic.go:334] "Generic (PLEG): container finished" podID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerID="37009c48c11ee62bd23237579f9cc9c8d427c5cbaddb700f28802586ebc40376" exitCode=0
Jan 21 06:49:43 crc kubenswrapper[4913]: I0121 06:49:43.865504 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch" event={"ID":"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff","Type":"ContainerDied","Data":"37009c48c11ee62bd23237579f9cc9c8d427c5cbaddb700f28802586ebc40376"}
Jan 21 06:49:44 crc kubenswrapper[4913]: I0121 06:49:44.873990 4913 generic.go:334] "Generic (PLEG): container finished" podID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerID="ab7eba0415a79bbb3100d97d9966a99002b3c45fe402ca2d92dfeca4328093d3" exitCode=0
Jan 21 06:49:44 crc kubenswrapper[4913]: I0121 06:49:44.874327 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch" event={"ID":"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff","Type":"ContainerDied","Data":"ab7eba0415a79bbb3100d97d9966a99002b3c45fe402ca2d92dfeca4328093d3"}
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.235688 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.357304 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-bundle\") pod \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") "
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.357405 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbbdm\" (UniqueName: \"kubernetes.io/projected/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-kube-api-access-gbbdm\") pod \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") "
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.357797 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-util\") pod \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\" (UID: \"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff\") "
Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.358120 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-bundle" (OuterVolumeSpecName: "bundle") pod "8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" (UID: "8829e790-ce91-4ccb-8b7b-955b5d4cc3ff"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.358289 4913 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.365555 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-kube-api-access-gbbdm" (OuterVolumeSpecName: "kube-api-access-gbbdm") pod "8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" (UID: "8829e790-ce91-4ccb-8b7b-955b5d4cc3ff"). InnerVolumeSpecName "kube-api-access-gbbdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.390213 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-util" (OuterVolumeSpecName: "util") pod "8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" (UID: "8829e790-ce91-4ccb-8b7b-955b5d4cc3ff"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.459452 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbbdm\" (UniqueName: \"kubernetes.io/projected/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-kube-api-access-gbbdm\") on node \"crc\" DevicePath \"\"" Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.459640 4913 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff-util\") on node \"crc\" DevicePath \"\"" Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.903681 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch" event={"ID":"8829e790-ce91-4ccb-8b7b-955b5d4cc3ff","Type":"ContainerDied","Data":"c884ae2c47d957e330460d081e220f62d940e2a97f174bd338c9f15f97922f16"} Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.903730 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c884ae2c47d957e330460d081e220f62d940e2a97f174bd338c9f15f97922f16" Jan 21 06:49:46 crc kubenswrapper[4913]: I0121 06:49:46.903763 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.024320 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"] Jan 21 06:49:56 crc kubenswrapper[4913]: E0121 06:49:56.026063 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="pull" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.026166 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="pull" Jan 21 06:49:56 crc kubenswrapper[4913]: E0121 06:49:56.026260 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="util" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.026339 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="util" Jan 21 06:49:56 crc kubenswrapper[4913]: E0121 06:49:56.026419 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="extract" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.026499 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="extract" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.026731 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" containerName="extract" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.027296 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.034909 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-wxng4" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.056972 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"] Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.184170 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c6s9\" (UniqueName: \"kubernetes.io/projected/f401b62e-8ebd-413e-a383-d9e74626c3d4-kube-api-access-9c6s9\") pod \"rabbitmq-cluster-operator-779fc9694b-d8cz5\" (UID: \"f401b62e-8ebd-413e-a383-d9e74626c3d4\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.286244 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c6s9\" (UniqueName: \"kubernetes.io/projected/f401b62e-8ebd-413e-a383-d9e74626c3d4-kube-api-access-9c6s9\") pod \"rabbitmq-cluster-operator-779fc9694b-d8cz5\" (UID: \"f401b62e-8ebd-413e-a383-d9e74626c3d4\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.316262 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c6s9\" (UniqueName: \"kubernetes.io/projected/f401b62e-8ebd-413e-a383-d9e74626c3d4-kube-api-access-9c6s9\") pod \"rabbitmq-cluster-operator-779fc9694b-d8cz5\" (UID: \"f401b62e-8ebd-413e-a383-d9e74626c3d4\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.351020 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.473500 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9xvbq"] Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.475000 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.492726 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xvbq"] Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.589541 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-utilities\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.589752 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-catalog-content\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.589842 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddpl2\" (UniqueName: \"kubernetes.io/projected/f63b56df-1330-4e60-8eb8-000cdd3d6a19-kube-api-access-ddpl2\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.691140 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-utilities\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.691231 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-catalog-content\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.691265 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddpl2\" (UniqueName: \"kubernetes.io/projected/f63b56df-1330-4e60-8eb8-000cdd3d6a19-kube-api-access-ddpl2\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.691726 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-utilities\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.692303 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-catalog-content\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.710650 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ddpl2\" (UniqueName: \"kubernetes.io/projected/f63b56df-1330-4e60-8eb8-000cdd3d6a19-kube-api-access-ddpl2\") pod \"community-operators-9xvbq\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.813506 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"] Jan 21 06:49:56 crc kubenswrapper[4913]: I0121 06:49:56.816759 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:56.998394 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" event={"ID":"f401b62e-8ebd-413e-a383-d9e74626c3d4","Type":"ContainerStarted","Data":"35b959c40aff587948c5fd74b98b898c0bc76e951ec34079c1bec3b80111a1d1"} Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.124943 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xvbq"] Jan 21 06:49:57 crc kubenswrapper[4913]: W0121 06:49:57.129226 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf63b56df_1330_4e60_8eb8_000cdd3d6a19.slice/crio-451f19926e4ba72260fe1d2fc800db6fdaf656d9a5199a92472da63ac0eb1f07 WatchSource:0}: Error finding container 451f19926e4ba72260fe1d2fc800db6fdaf656d9a5199a92472da63ac0eb1f07: Status 404 returned error can't find the container with id 451f19926e4ba72260fe1d2fc800db6fdaf656d9a5199a92472da63ac0eb1f07 Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.659312 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qvqfr"] Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.660488 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.663718 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvqfr"] Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.808005 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-utilities\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.808473 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-catalog-content\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.808642 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbljm\" (UniqueName: \"kubernetes.io/projected/bdfc921b-6ade-4913-afd4-4b75ebcead15-kube-api-access-lbljm\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.910067 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-utilities\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.910152 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-catalog-content\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.910190 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbljm\" (UniqueName: \"kubernetes.io/projected/bdfc921b-6ade-4913-afd4-4b75ebcead15-kube-api-access-lbljm\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.910617 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-utilities\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.910876 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-catalog-content\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.936160 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbljm\" (UniqueName: \"kubernetes.io/projected/bdfc921b-6ade-4913-afd4-4b75ebcead15-kube-api-access-lbljm\") pod \"redhat-operators-qvqfr\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:57 crc kubenswrapper[4913]: I0121 06:49:57.984650 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:49:58 crc kubenswrapper[4913]: I0121 06:49:58.008657 4913 generic.go:334] "Generic (PLEG): container finished" podID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerID="bc17a3b4e7007c1a6edef6ce6cf75a6a6d868472071ad902e9b4ecff476c27f7" exitCode=0 Jan 21 06:49:58 crc kubenswrapper[4913]: I0121 06:49:58.008709 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xvbq" event={"ID":"f63b56df-1330-4e60-8eb8-000cdd3d6a19","Type":"ContainerDied","Data":"bc17a3b4e7007c1a6edef6ce6cf75a6a6d868472071ad902e9b4ecff476c27f7"} Jan 21 06:49:58 crc kubenswrapper[4913]: I0121 06:49:58.008740 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xvbq" event={"ID":"f63b56df-1330-4e60-8eb8-000cdd3d6a19","Type":"ContainerStarted","Data":"451f19926e4ba72260fe1d2fc800db6fdaf656d9a5199a92472da63ac0eb1f07"} Jan 21 06:49:58 crc kubenswrapper[4913]: I0121 06:49:58.445425 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvqfr"] Jan 21 06:49:58 crc kubenswrapper[4913]: W0121 06:49:58.457820 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdfc921b_6ade_4913_afd4_4b75ebcead15.slice/crio-f65fbc7aa9ce15d34a2121fff46b26b74ba169622ca2ecece61dfd12571181ba WatchSource:0}: Error finding container f65fbc7aa9ce15d34a2121fff46b26b74ba169622ca2ecece61dfd12571181ba: Status 404 returned error can't find the container with id f65fbc7aa9ce15d34a2121fff46b26b74ba169622ca2ecece61dfd12571181ba Jan 21 06:49:59 crc kubenswrapper[4913]: I0121 06:49:59.018148 4913 generic.go:334] "Generic (PLEG): container finished" podID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerID="2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498" exitCode=0 Jan 21 06:49:59 crc kubenswrapper[4913]: I0121 
06:49:59.018408 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerDied","Data":"2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498"} Jan 21 06:49:59 crc kubenswrapper[4913]: I0121 06:49:59.018733 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerStarted","Data":"f65fbc7aa9ce15d34a2121fff46b26b74ba169622ca2ecece61dfd12571181ba"} Jan 21 06:50:01 crc kubenswrapper[4913]: I0121 06:50:01.039625 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xvbq" event={"ID":"f63b56df-1330-4e60-8eb8-000cdd3d6a19","Type":"ContainerDied","Data":"7cfbbb260090cca3cda10b88b0e9c196363805754f66ac431ea64beb1eda9291"} Jan 21 06:50:01 crc kubenswrapper[4913]: I0121 06:50:01.039904 4913 generic.go:334] "Generic (PLEG): container finished" podID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerID="7cfbbb260090cca3cda10b88b0e9c196363805754f66ac431ea64beb1eda9291" exitCode=0 Jan 21 06:50:01 crc kubenswrapper[4913]: I0121 06:50:01.042507 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" event={"ID":"f401b62e-8ebd-413e-a383-d9e74626c3d4","Type":"ContainerStarted","Data":"3e8408a1f2b92dbab63b868892a8ba456a45462b5c6f6463c5d4225f6c7d6630"} Jan 21 06:50:01 crc kubenswrapper[4913]: I0121 06:50:01.044773 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerStarted","Data":"4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377"} Jan 21 06:50:01 crc kubenswrapper[4913]: I0121 06:50:01.083371 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" podStartSLOduration=1.5687557810000001 podStartE2EDuration="5.083341584s" podCreationTimestamp="2026-01-21 06:49:56 +0000 UTC" firstStartedPulling="2026-01-21 06:49:56.826382822 +0000 UTC m=+886.622742495" lastFinishedPulling="2026-01-21 06:50:00.340968625 +0000 UTC m=+890.137328298" observedRunningTime="2026-01-21 06:50:01.078133292 +0000 UTC m=+890.874492975" watchObservedRunningTime="2026-01-21 06:50:01.083341584 +0000 UTC m=+890.879701297" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.052500 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xvbq" event={"ID":"f63b56df-1330-4e60-8eb8-000cdd3d6a19","Type":"ContainerStarted","Data":"29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337"} Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.055866 4913 generic.go:334] "Generic (PLEG): container finished" podID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerID="4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377" exitCode=0 Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.055984 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerDied","Data":"4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377"} Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.098029 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9xvbq" podStartSLOduration=2.605754447 podStartE2EDuration="6.097145551s" podCreationTimestamp="2026-01-21 06:49:56 +0000 UTC" firstStartedPulling="2026-01-21 06:49:58.01047848 +0000 UTC m=+887.806838153" lastFinishedPulling="2026-01-21 06:50:01.501869594 +0000 UTC m=+891.298229257" observedRunningTime="2026-01-21 06:50:02.077482457 +0000 UTC m=+891.873842170" watchObservedRunningTime="2026-01-21 
06:50:02.097145551 +0000 UTC m=+891.893505254" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.653451 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ngbvn"] Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.654871 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.662949 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngbvn"] Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.776194 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-catalog-content\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.776674 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-utilities\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.776726 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7pn5\" (UniqueName: \"kubernetes.io/projected/aa7f103e-4216-4cf5-b6a7-42b907744bba-kube-api-access-h7pn5\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.877705 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-catalog-content\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.878630 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-utilities\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.879017 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7pn5\" (UniqueName: \"kubernetes.io/projected/aa7f103e-4216-4cf5-b6a7-42b907744bba-kube-api-access-h7pn5\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.878988 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-utilities\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.878525 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-catalog-content\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.936489 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7pn5\" (UniqueName: 
\"kubernetes.io/projected/aa7f103e-4216-4cf5-b6a7-42b907744bba-kube-api-access-h7pn5\") pod \"redhat-marketplace-ngbvn\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:02 crc kubenswrapper[4913]: I0121 06:50:02.980604 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:03 crc kubenswrapper[4913]: I0121 06:50:03.133713 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerStarted","Data":"158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e"} Jan 21 06:50:03 crc kubenswrapper[4913]: I0121 06:50:03.156496 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qvqfr" podStartSLOduration=3.436100337 podStartE2EDuration="6.156477283s" podCreationTimestamp="2026-01-21 06:49:57 +0000 UTC" firstStartedPulling="2026-01-21 06:49:59.83642477 +0000 UTC m=+889.632784453" lastFinishedPulling="2026-01-21 06:50:02.556801726 +0000 UTC m=+892.353161399" observedRunningTime="2026-01-21 06:50:03.15048078 +0000 UTC m=+892.946840473" watchObservedRunningTime="2026-01-21 06:50:03.156477283 +0000 UTC m=+892.952836956" Jan 21 06:50:03 crc kubenswrapper[4913]: I0121 06:50:03.476818 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngbvn"] Jan 21 06:50:03 crc kubenswrapper[4913]: W0121 06:50:03.497756 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa7f103e_4216_4cf5_b6a7_42b907744bba.slice/crio-ec83403672649992562a38b142c400bb32284ba9b3bace3203b8fc8b0ec01b05 WatchSource:0}: Error finding container ec83403672649992562a38b142c400bb32284ba9b3bace3203b8fc8b0ec01b05: Status 404 returned error can't find the 
container with id ec83403672649992562a38b142c400bb32284ba9b3bace3203b8fc8b0ec01b05 Jan 21 06:50:04 crc kubenswrapper[4913]: I0121 06:50:04.141722 4913 generic.go:334] "Generic (PLEG): container finished" podID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerID="a300cf1311edbe2e8917475c52980c805a2b63750a7dc8830211bf50b92e71c9" exitCode=0 Jan 21 06:50:04 crc kubenswrapper[4913]: I0121 06:50:04.143519 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngbvn" event={"ID":"aa7f103e-4216-4cf5-b6a7-42b907744bba","Type":"ContainerDied","Data":"a300cf1311edbe2e8917475c52980c805a2b63750a7dc8830211bf50b92e71c9"} Jan 21 06:50:04 crc kubenswrapper[4913]: I0121 06:50:04.143565 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngbvn" event={"ID":"aa7f103e-4216-4cf5-b6a7-42b907744bba","Type":"ContainerStarted","Data":"ec83403672649992562a38b142c400bb32284ba9b3bace3203b8fc8b0ec01b05"} Jan 21 06:50:06 crc kubenswrapper[4913]: I0121 06:50:06.817132 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:50:06 crc kubenswrapper[4913]: I0121 06:50:06.817466 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:50:06 crc kubenswrapper[4913]: I0121 06:50:06.858717 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.057345 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.058586 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.060407 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.062033 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"rabbitmq-server-conf" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.062040 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cinder-kuttl-tests"/"rabbitmq-plugins-conf" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.062275 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"rabbitmq-server-dockercfg-x86ft" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.064335 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"rabbitmq-default-user" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.071296 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.186851 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.186900 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.186936 4913 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.186970 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.187028 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.187220 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgplc\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-kube-api-access-rgplc\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.187338 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aba37d31-e514-4471-ba72-3f2eeef63965\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc 
kubenswrapper[4913]: I0121 06:50:07.187571 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.223619 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289285 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289357 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289381 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289412 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289443 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289462 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289502 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgplc\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-kube-api-access-rgplc\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289555 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-aba37d31-e514-4471-ba72-3f2eeef63965\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.289996 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " 
pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.290038 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.290264 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.291945 4913 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.291972 4913 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-aba37d31-e514-4471-ba72-3f2eeef63965\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9e2837abecefc96eeb2280c3452599d3ba232e1d6e20df970e5810ccca23e04e/globalmount\"" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.297113 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.297396 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.298898 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.308805 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgplc\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-kube-api-access-rgplc\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.318338 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-aba37d31-e514-4471-ba72-3f2eeef63965\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") pod \"rabbitmq-server-0\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.373736 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.807788 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:50:07 crc kubenswrapper[4913]: W0121 06:50:07.811437 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05b3d506_1b7a_4e74_8e75_bd5ad371a3e7.slice/crio-f98e00a3d92598e5485091863498c91da120da08e5945df34002da676d66c385 WatchSource:0}: Error finding container f98e00a3d92598e5485091863498c91da120da08e5945df34002da676d66c385: Status 404 returned error can't find the container with id f98e00a3d92598e5485091863498c91da120da08e5945df34002da676d66c385 Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.985626 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:50:07 crc kubenswrapper[4913]: I0121 06:50:07.985676 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:50:08 crc kubenswrapper[4913]: I0121 06:50:08.179371 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7","Type":"ContainerStarted","Data":"f98e00a3d92598e5485091863498c91da120da08e5945df34002da676d66c385"} Jan 21 06:50:08 crc kubenswrapper[4913]: I0121 06:50:08.181628 4913 generic.go:334] "Generic (PLEG): container finished" podID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerID="3aa48f0fbcec5babec897d165283dcc38b0310a8438e263618189600feb1cda5" exitCode=0 Jan 21 06:50:08 crc kubenswrapper[4913]: I0121 06:50:08.181714 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngbvn" 
event={"ID":"aa7f103e-4216-4cf5-b6a7-42b907744bba","Type":"ContainerDied","Data":"3aa48f0fbcec5babec897d165283dcc38b0310a8438e263618189600feb1cda5"} Jan 21 06:50:08 crc kubenswrapper[4913]: I0121 06:50:08.319430 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:50:08 crc kubenswrapper[4913]: I0121 06:50:08.319718 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:50:09 crc kubenswrapper[4913]: I0121 06:50:09.042333 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qvqfr" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="registry-server" probeResult="failure" output=< Jan 21 06:50:09 crc kubenswrapper[4913]: timeout: failed to connect service ":50051" within 1s Jan 21 06:50:09 crc kubenswrapper[4913]: > Jan 21 06:50:09 crc kubenswrapper[4913]: I0121 06:50:09.195832 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngbvn" event={"ID":"aa7f103e-4216-4cf5-b6a7-42b907744bba","Type":"ContainerStarted","Data":"8cdd23473203a667e25cb8a30b6f2d7a53119efd54a549c5e8e51d1b30ac020f"} Jan 21 06:50:09 crc kubenswrapper[4913]: I0121 06:50:09.222271 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ngbvn" podStartSLOduration=2.638537919 podStartE2EDuration="7.22225414s" podCreationTimestamp="2026-01-21 06:50:02 +0000 UTC" firstStartedPulling="2026-01-21 06:50:04.144173641 +0000 
UTC m=+893.940533354" lastFinishedPulling="2026-01-21 06:50:08.727889902 +0000 UTC m=+898.524249575" observedRunningTime="2026-01-21 06:50:09.219045793 +0000 UTC m=+899.015405486" watchObservedRunningTime="2026-01-21 06:50:09.22225414 +0000 UTC m=+899.018613813" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.661853 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-nvvrn"] Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.662909 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.665143 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-ldzbh" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.668567 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-nvvrn"] Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.686055 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5nnr\" (UniqueName: \"kubernetes.io/projected/aafa8ec9-8d47-454f-ade6-cc83939b040d-kube-api-access-d5nnr\") pod \"keystone-operator-index-nvvrn\" (UID: \"aafa8ec9-8d47-454f-ade6-cc83939b040d\") " pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.805311 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5nnr\" (UniqueName: \"kubernetes.io/projected/aafa8ec9-8d47-454f-ade6-cc83939b040d-kube-api-access-d5nnr\") pod \"keystone-operator-index-nvvrn\" (UID: \"aafa8ec9-8d47-454f-ade6-cc83939b040d\") " pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.833921 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d5nnr\" (UniqueName: \"kubernetes.io/projected/aafa8ec9-8d47-454f-ade6-cc83939b040d-kube-api-access-d5nnr\") pod \"keystone-operator-index-nvvrn\" (UID: \"aafa8ec9-8d47-454f-ade6-cc83939b040d\") " pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.980901 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.980956 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:12 crc kubenswrapper[4913]: I0121 06:50:12.984359 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:13 crc kubenswrapper[4913]: I0121 06:50:13.058791 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:13 crc kubenswrapper[4913]: I0121 06:50:13.291117 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:15 crc kubenswrapper[4913]: I0121 06:50:15.449358 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xvbq"] Jan 21 06:50:15 crc kubenswrapper[4913]: I0121 06:50:15.449909 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9xvbq" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="registry-server" containerID="cri-o://29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337" gracePeriod=2 Jan 21 06:50:16 crc kubenswrapper[4913]: E0121 06:50:16.820206 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 06:50:16 crc kubenswrapper[4913]: E0121 06:50:16.822788 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 06:50:16 crc kubenswrapper[4913]: E0121 06:50:16.825412 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 06:50:16 crc kubenswrapper[4913]: E0121 06:50:16.825524 4913 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-9xvbq" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="registry-server" Jan 21 06:50:18 crc kubenswrapper[4913]: I0121 06:50:18.052255 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:50:18 crc kubenswrapper[4913]: I0121 06:50:18.118681 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:50:18 crc kubenswrapper[4913]: I0121 06:50:18.457998 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngbvn"] Jan 21 06:50:18 crc kubenswrapper[4913]: I0121 06:50:18.458374 4913 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-ngbvn" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="registry-server" containerID="cri-o://8cdd23473203a667e25cb8a30b6f2d7a53119efd54a549c5e8e51d1b30ac020f" gracePeriod=2 Jan 21 06:50:19 crc kubenswrapper[4913]: I0121 06:50:19.276374 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9xvbq_f63b56df-1330-4e60-8eb8-000cdd3d6a19/registry-server/0.log" Jan 21 06:50:19 crc kubenswrapper[4913]: I0121 06:50:19.277307 4913 generic.go:334] "Generic (PLEG): container finished" podID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerID="29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337" exitCode=137 Jan 21 06:50:19 crc kubenswrapper[4913]: I0121 06:50:19.277349 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xvbq" event={"ID":"f63b56df-1330-4e60-8eb8-000cdd3d6a19","Type":"ContainerDied","Data":"29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337"} Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.295184 4913 generic.go:334] "Generic (PLEG): container finished" podID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerID="8cdd23473203a667e25cb8a30b6f2d7a53119efd54a549c5e8e51d1b30ac020f" exitCode=0 Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.295265 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngbvn" event={"ID":"aa7f103e-4216-4cf5-b6a7-42b907744bba","Type":"ContainerDied","Data":"8cdd23473203a667e25cb8a30b6f2d7a53119efd54a549c5e8e51d1b30ac020f"} Jan 21 06:50:21 crc kubenswrapper[4913]: E0121 06:50:21.329826 4913 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="rabbitmq:4.1.1-management" Jan 21 06:50:21 crc kubenswrapper[4913]: E0121 06:50:21.329871 4913 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="rabbitmq:4.1.1-management" Jan 21 06:50:21 crc kubenswrapper[4913]: E0121 06:50:21.329987 4913 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:rabbitmq:4.1.1-management,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgplc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_cinder-kuttl-tests(05b3d506-1b7a-4e74-8e75-bd5ad371a3e7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 21 06:50:21 crc kubenswrapper[4913]: E0121 06:50:21.331049 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cinder-kuttl-tests/rabbitmq-server-0" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.355644 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.357479 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9xvbq_f63b56df-1330-4e60-8eb8-000cdd3d6a19/registry-server/0.log" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.359499 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.538063 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-utilities\") pod \"aa7f103e-4216-4cf5-b6a7-42b907744bba\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.539278 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-utilities" (OuterVolumeSpecName: "utilities") pod "aa7f103e-4216-4cf5-b6a7-42b907744bba" (UID: "aa7f103e-4216-4cf5-b6a7-42b907744bba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.539460 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7pn5\" (UniqueName: \"kubernetes.io/projected/aa7f103e-4216-4cf5-b6a7-42b907744bba-kube-api-access-h7pn5\") pod \"aa7f103e-4216-4cf5-b6a7-42b907744bba\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.540569 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-catalog-content\") pod \"aa7f103e-4216-4cf5-b6a7-42b907744bba\" (UID: \"aa7f103e-4216-4cf5-b6a7-42b907744bba\") " Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.541804 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-utilities\") pod \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.541883 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-catalog-content\") pod \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.541913 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddpl2\" (UniqueName: \"kubernetes.io/projected/f63b56df-1330-4e60-8eb8-000cdd3d6a19-kube-api-access-ddpl2\") pod \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\" (UID: \"f63b56df-1330-4e60-8eb8-000cdd3d6a19\") " Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.542344 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.543485 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-utilities" (OuterVolumeSpecName: "utilities") pod "f63b56df-1330-4e60-8eb8-000cdd3d6a19" (UID: "f63b56df-1330-4e60-8eb8-000cdd3d6a19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.546309 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63b56df-1330-4e60-8eb8-000cdd3d6a19-kube-api-access-ddpl2" (OuterVolumeSpecName: "kube-api-access-ddpl2") pod "f63b56df-1330-4e60-8eb8-000cdd3d6a19" (UID: "f63b56df-1330-4e60-8eb8-000cdd3d6a19"). InnerVolumeSpecName "kube-api-access-ddpl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.546407 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa7f103e-4216-4cf5-b6a7-42b907744bba-kube-api-access-h7pn5" (OuterVolumeSpecName: "kube-api-access-h7pn5") pod "aa7f103e-4216-4cf5-b6a7-42b907744bba" (UID: "aa7f103e-4216-4cf5-b6a7-42b907744bba"). InnerVolumeSpecName "kube-api-access-h7pn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.572026 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa7f103e-4216-4cf5-b6a7-42b907744bba" (UID: "aa7f103e-4216-4cf5-b6a7-42b907744bba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.618238 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f63b56df-1330-4e60-8eb8-000cdd3d6a19" (UID: "f63b56df-1330-4e60-8eb8-000cdd3d6a19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.643494 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7pn5\" (UniqueName: \"kubernetes.io/projected/aa7f103e-4216-4cf5-b6a7-42b907744bba-kube-api-access-h7pn5\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.643538 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7f103e-4216-4cf5-b6a7-42b907744bba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.643560 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.643571 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f63b56df-1330-4e60-8eb8-000cdd3d6a19-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.643581 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddpl2\" (UniqueName: \"kubernetes.io/projected/f63b56df-1330-4e60-8eb8-000cdd3d6a19-kube-api-access-ddpl2\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:21 crc kubenswrapper[4913]: I0121 06:50:21.721309 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/keystone-operator-index-nvvrn"] Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.306777 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngbvn" event={"ID":"aa7f103e-4216-4cf5-b6a7-42b907744bba","Type":"ContainerDied","Data":"ec83403672649992562a38b142c400bb32284ba9b3bace3203b8fc8b0ec01b05"} Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.306820 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngbvn" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.307120 4913 scope.go:117] "RemoveContainer" containerID="8cdd23473203a667e25cb8a30b6f2d7a53119efd54a549c5e8e51d1b30ac020f" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.310398 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-nvvrn" event={"ID":"aafa8ec9-8d47-454f-ade6-cc83939b040d","Type":"ContainerStarted","Data":"13b2addf8c21bece7103dc74546b4b535b876e4503f539b9501595a1b88972a6"} Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.315921 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9xvbq_f63b56df-1330-4e60-8eb8-000cdd3d6a19/registry-server/0.log" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.321310 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xvbq" event={"ID":"f63b56df-1330-4e60-8eb8-000cdd3d6a19","Type":"ContainerDied","Data":"451f19926e4ba72260fe1d2fc800db6fdaf656d9a5199a92472da63ac0eb1f07"} Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.321490 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9xvbq" Jan 21 06:50:22 crc kubenswrapper[4913]: E0121 06:50:22.324992 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"rabbitmq:4.1.1-management\\\"\"" pod="cinder-kuttl-tests/rabbitmq-server-0" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.366613 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xvbq"] Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.373469 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9xvbq"] Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.383236 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngbvn"] Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.389504 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngbvn"] Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.446284 4913 scope.go:117] "RemoveContainer" containerID="3aa48f0fbcec5babec897d165283dcc38b0310a8438e263618189600feb1cda5" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.460565 4913 scope.go:117] "RemoveContainer" containerID="a300cf1311edbe2e8917475c52980c805a2b63750a7dc8830211bf50b92e71c9" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.490630 4913 scope.go:117] "RemoveContainer" containerID="29bc64ef4e3c3963ab4f79d64951f10eb5a47eb3e5a1c72f8d5864fced162337" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.513033 4913 scope.go:117] "RemoveContainer" containerID="7cfbbb260090cca3cda10b88b0e9c196363805754f66ac431ea64beb1eda9291" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.538873 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" path="/var/lib/kubelet/pods/aa7f103e-4216-4cf5-b6a7-42b907744bba/volumes" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.539682 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" path="/var/lib/kubelet/pods/f63b56df-1330-4e60-8eb8-000cdd3d6a19/volumes" Jan 21 06:50:22 crc kubenswrapper[4913]: I0121 06:50:22.543292 4913 scope.go:117] "RemoveContainer" containerID="bc17a3b4e7007c1a6edef6ce6cf75a6a6d868472071ad902e9b4ecff476c27f7" Jan 21 06:50:23 crc kubenswrapper[4913]: I0121 06:50:23.329355 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-nvvrn" event={"ID":"aafa8ec9-8d47-454f-ade6-cc83939b040d","Type":"ContainerStarted","Data":"4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741"} Jan 21 06:50:23 crc kubenswrapper[4913]: I0121 06:50:23.356800 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-nvvrn" podStartSLOduration=10.593800254 podStartE2EDuration="11.356767337s" podCreationTimestamp="2026-01-21 06:50:12 +0000 UTC" firstStartedPulling="2026-01-21 06:50:21.736337877 +0000 UTC m=+911.532697550" lastFinishedPulling="2026-01-21 06:50:22.49930496 +0000 UTC m=+912.295664633" observedRunningTime="2026-01-21 06:50:23.349663544 +0000 UTC m=+913.146023257" watchObservedRunningTime="2026-01-21 06:50:23.356767337 +0000 UTC m=+913.153127050" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061067 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hc6rw"] Jan 21 06:50:26 crc kubenswrapper[4913]: E0121 06:50:26.061689 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="extract-utilities" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061705 4913 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="extract-utilities" Jan 21 06:50:26 crc kubenswrapper[4913]: E0121 06:50:26.061717 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="extract-utilities" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061727 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="extract-utilities" Jan 21 06:50:26 crc kubenswrapper[4913]: E0121 06:50:26.061741 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="extract-content" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061750 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="extract-content" Jan 21 06:50:26 crc kubenswrapper[4913]: E0121 06:50:26.061769 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="registry-server" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061778 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="registry-server" Jan 21 06:50:26 crc kubenswrapper[4913]: E0121 06:50:26.061794 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="registry-server" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061803 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="registry-server" Jan 21 06:50:26 crc kubenswrapper[4913]: E0121 06:50:26.061813 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="extract-content" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061821 4913 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="extract-content" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061963 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63b56df-1330-4e60-8eb8-000cdd3d6a19" containerName="registry-server" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.061975 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa7f103e-4216-4cf5-b6a7-42b907744bba" containerName="registry-server" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.063128 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.075988 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hc6rw"] Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.207330 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-catalog-content\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.207414 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b45qx\" (UniqueName: \"kubernetes.io/projected/51f64f62-a622-4f16-9931-159d25ea6a0d-kube-api-access-b45qx\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.207511 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-utilities\") pod \"certified-operators-hc6rw\" (UID: 
\"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.309111 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b45qx\" (UniqueName: \"kubernetes.io/projected/51f64f62-a622-4f16-9931-159d25ea6a0d-kube-api-access-b45qx\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.309177 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-utilities\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.309232 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-catalog-content\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.309749 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-catalog-content\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.309828 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-utilities\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") 
" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.337983 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b45qx\" (UniqueName: \"kubernetes.io/projected/51f64f62-a622-4f16-9931-159d25ea6a0d-kube-api-access-b45qx\") pod \"certified-operators-hc6rw\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.395720 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.451769 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvqfr"] Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.452082 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qvqfr" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="registry-server" containerID="cri-o://158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e" gracePeriod=2 Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.668927 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hc6rw"] Jan 21 06:50:26 crc kubenswrapper[4913]: I0121 06:50:26.843758 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.020115 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbljm\" (UniqueName: \"kubernetes.io/projected/bdfc921b-6ade-4913-afd4-4b75ebcead15-kube-api-access-lbljm\") pod \"bdfc921b-6ade-4913-afd4-4b75ebcead15\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.020414 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-utilities\") pod \"bdfc921b-6ade-4913-afd4-4b75ebcead15\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.020443 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-catalog-content\") pod \"bdfc921b-6ade-4913-afd4-4b75ebcead15\" (UID: \"bdfc921b-6ade-4913-afd4-4b75ebcead15\") " Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.021128 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-utilities" (OuterVolumeSpecName: "utilities") pod "bdfc921b-6ade-4913-afd4-4b75ebcead15" (UID: "bdfc921b-6ade-4913-afd4-4b75ebcead15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.025767 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfc921b-6ade-4913-afd4-4b75ebcead15-kube-api-access-lbljm" (OuterVolumeSpecName: "kube-api-access-lbljm") pod "bdfc921b-6ade-4913-afd4-4b75ebcead15" (UID: "bdfc921b-6ade-4913-afd4-4b75ebcead15"). InnerVolumeSpecName "kube-api-access-lbljm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.122002 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.122043 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbljm\" (UniqueName: \"kubernetes.io/projected/bdfc921b-6ade-4913-afd4-4b75ebcead15-kube-api-access-lbljm\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.126873 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdfc921b-6ade-4913-afd4-4b75ebcead15" (UID: "bdfc921b-6ade-4913-afd4-4b75ebcead15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.224266 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfc921b-6ade-4913-afd4-4b75ebcead15-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.364242 4913 generic.go:334] "Generic (PLEG): container finished" podID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerID="158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e" exitCode=0 Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.364305 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerDied","Data":"158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e"} Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.364331 4913 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-qvqfr" event={"ID":"bdfc921b-6ade-4913-afd4-4b75ebcead15","Type":"ContainerDied","Data":"f65fbc7aa9ce15d34a2121fff46b26b74ba169622ca2ecece61dfd12571181ba"} Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.364347 4913 scope.go:117] "RemoveContainer" containerID="158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.364472 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvqfr" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.367787 4913 generic.go:334] "Generic (PLEG): container finished" podID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerID="b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3" exitCode=0 Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.367854 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerDied","Data":"b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3"} Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.367901 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerStarted","Data":"054f662824bbecf02694f4aaa504011845fb096af1a535fac9264ca69281bc4f"} Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.385821 4913 scope.go:117] "RemoveContainer" containerID="4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.409856 4913 scope.go:117] "RemoveContainer" containerID="2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.415839 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-qvqfr"] Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.427907 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qvqfr"] Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.449457 4913 scope.go:117] "RemoveContainer" containerID="158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e" Jan 21 06:50:27 crc kubenswrapper[4913]: E0121 06:50:27.449989 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e\": container with ID starting with 158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e not found: ID does not exist" containerID="158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.450040 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e"} err="failed to get container status \"158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e\": rpc error: code = NotFound desc = could not find container \"158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e\": container with ID starting with 158e8d36b5fd537f49d2df46163f1cb9ff1be91e867bb33f52ac8fd92a5e3b4e not found: ID does not exist" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.450070 4913 scope.go:117] "RemoveContainer" containerID="4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377" Jan 21 06:50:27 crc kubenswrapper[4913]: E0121 06:50:27.450692 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377\": container with ID starting with 4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377 not 
found: ID does not exist" containerID="4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.450746 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377"} err="failed to get container status \"4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377\": rpc error: code = NotFound desc = could not find container \"4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377\": container with ID starting with 4bcb789ef41a53b9eea5baa395ae391ed7ec94fcf3ae5350fe39d313a1269377 not found: ID does not exist" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.450779 4913 scope.go:117] "RemoveContainer" containerID="2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498" Jan 21 06:50:27 crc kubenswrapper[4913]: E0121 06:50:27.451220 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498\": container with ID starting with 2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498 not found: ID does not exist" containerID="2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498" Jan 21 06:50:27 crc kubenswrapper[4913]: I0121 06:50:27.451249 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498"} err="failed to get container status \"2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498\": rpc error: code = NotFound desc = could not find container \"2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498\": container with ID starting with 2d8672fb3a3d106d0267acfa991c5fefb23080e30e4aa5e674138c0783da4498 not found: ID does not exist" Jan 21 06:50:28 crc kubenswrapper[4913]: I0121 06:50:28.375823 
4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerStarted","Data":"28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6"} Jan 21 06:50:28 crc kubenswrapper[4913]: I0121 06:50:28.534658 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" path="/var/lib/kubelet/pods/bdfc921b-6ade-4913-afd4-4b75ebcead15/volumes" Jan 21 06:50:29 crc kubenswrapper[4913]: I0121 06:50:29.394675 4913 generic.go:334] "Generic (PLEG): container finished" podID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerID="28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6" exitCode=0 Jan 21 06:50:29 crc kubenswrapper[4913]: I0121 06:50:29.394811 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerDied","Data":"28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6"} Jan 21 06:50:30 crc kubenswrapper[4913]: I0121 06:50:30.405127 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerStarted","Data":"79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633"} Jan 21 06:50:30 crc kubenswrapper[4913]: I0121 06:50:30.441260 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hc6rw" podStartSLOduration=1.905263704 podStartE2EDuration="4.441228665s" podCreationTimestamp="2026-01-21 06:50:26 +0000 UTC" firstStartedPulling="2026-01-21 06:50:27.372919695 +0000 UTC m=+917.169279368" lastFinishedPulling="2026-01-21 06:50:29.908884636 +0000 UTC m=+919.705244329" observedRunningTime="2026-01-21 06:50:30.43074945 +0000 UTC m=+920.227109173" watchObservedRunningTime="2026-01-21 
06:50:30.441228665 +0000 UTC m=+920.237588388" Jan 21 06:50:32 crc kubenswrapper[4913]: I0121 06:50:32.984462 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:32 crc kubenswrapper[4913]: I0121 06:50:32.984820 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:33 crc kubenswrapper[4913]: I0121 06:50:33.016689 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:33 crc kubenswrapper[4913]: I0121 06:50:33.456986 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:50:36 crc kubenswrapper[4913]: I0121 06:50:36.395916 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:36 crc kubenswrapper[4913]: I0121 06:50:36.396303 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:36 crc kubenswrapper[4913]: I0121 06:50:36.459046 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:36 crc kubenswrapper[4913]: I0121 06:50:36.521551 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:38 crc kubenswrapper[4913]: I0121 06:50:38.319417 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:50:38 crc kubenswrapper[4913]: I0121 06:50:38.319875 4913 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:50:39 crc kubenswrapper[4913]: I0121 06:50:39.474444 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7","Type":"ContainerStarted","Data":"f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d"} Jan 21 06:50:42 crc kubenswrapper[4913]: I0121 06:50:42.454221 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hc6rw"] Jan 21 06:50:42 crc kubenswrapper[4913]: I0121 06:50:42.455222 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hc6rw" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="registry-server" containerID="cri-o://79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633" gracePeriod=2 Jan 21 06:50:42 crc kubenswrapper[4913]: I0121 06:50:42.956881 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.058822 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-catalog-content\") pod \"51f64f62-a622-4f16-9931-159d25ea6a0d\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.058885 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-utilities\") pod \"51f64f62-a622-4f16-9931-159d25ea6a0d\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.058961 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b45qx\" (UniqueName: \"kubernetes.io/projected/51f64f62-a622-4f16-9931-159d25ea6a0d-kube-api-access-b45qx\") pod \"51f64f62-a622-4f16-9931-159d25ea6a0d\" (UID: \"51f64f62-a622-4f16-9931-159d25ea6a0d\") " Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.059774 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-utilities" (OuterVolumeSpecName: "utilities") pod "51f64f62-a622-4f16-9931-159d25ea6a0d" (UID: "51f64f62-a622-4f16-9931-159d25ea6a0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.064682 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f64f62-a622-4f16-9931-159d25ea6a0d-kube-api-access-b45qx" (OuterVolumeSpecName: "kube-api-access-b45qx") pod "51f64f62-a622-4f16-9931-159d25ea6a0d" (UID: "51f64f62-a622-4f16-9931-159d25ea6a0d"). InnerVolumeSpecName "kube-api-access-b45qx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.121461 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51f64f62-a622-4f16-9931-159d25ea6a0d" (UID: "51f64f62-a622-4f16-9931-159d25ea6a0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.160217 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b45qx\" (UniqueName: \"kubernetes.io/projected/51f64f62-a622-4f16-9931-159d25ea6a0d-kube-api-access-b45qx\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.160256 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.160274 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f64f62-a622-4f16-9931-159d25ea6a0d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.504306 4913 generic.go:334] "Generic (PLEG): container finished" podID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerID="79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633" exitCode=0 Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.504353 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerDied","Data":"79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633"} Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.504374 4913 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hc6rw" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.504394 4913 scope.go:117] "RemoveContainer" containerID="79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.504382 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hc6rw" event={"ID":"51f64f62-a622-4f16-9931-159d25ea6a0d","Type":"ContainerDied","Data":"054f662824bbecf02694f4aaa504011845fb096af1a535fac9264ca69281bc4f"} Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.541067 4913 scope.go:117] "RemoveContainer" containerID="28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.541082 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hc6rw"] Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.548274 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hc6rw"] Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.567443 4913 scope.go:117] "RemoveContainer" containerID="b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.584883 4913 scope.go:117] "RemoveContainer" containerID="79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633" Jan 21 06:50:43 crc kubenswrapper[4913]: E0121 06:50:43.585284 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633\": container with ID starting with 79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633 not found: ID does not exist" containerID="79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.585321 
4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633"} err="failed to get container status \"79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633\": rpc error: code = NotFound desc = could not find container \"79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633\": container with ID starting with 79580e3a46cdfb5a2d2b2aee47843ab648badaa80605470f36d33125cf8de633 not found: ID does not exist" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.585341 4913 scope.go:117] "RemoveContainer" containerID="28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6" Jan 21 06:50:43 crc kubenswrapper[4913]: E0121 06:50:43.585644 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6\": container with ID starting with 28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6 not found: ID does not exist" containerID="28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.585708 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6"} err="failed to get container status \"28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6\": rpc error: code = NotFound desc = could not find container \"28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6\": container with ID starting with 28e4405ca2bf2d37e397ab15732c0b7ee0e03e24c9836ae048087c3243c311c6 not found: ID does not exist" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.585748 4913 scope.go:117] "RemoveContainer" containerID="b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3" Jan 21 06:50:43 crc kubenswrapper[4913]: E0121 
06:50:43.586110 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3\": container with ID starting with b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3 not found: ID does not exist" containerID="b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3" Jan 21 06:50:43 crc kubenswrapper[4913]: I0121 06:50:43.586133 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3"} err="failed to get container status \"b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3\": rpc error: code = NotFound desc = could not find container \"b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3\": container with ID starting with b524a39b79b86a97f0dccebf9c2957ccf4b32aa860906f3d63cdb6d5ece16fc3 not found: ID does not exist" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.544299 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" path="/var/lib/kubelet/pods/51f64f62-a622-4f16-9931-159d25ea6a0d/volumes" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.718944 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw"] Jan 21 06:50:44 crc kubenswrapper[4913]: E0121 06:50:44.719211 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="extract-content" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719226 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="extract-content" Jan 21 06:50:44 crc kubenswrapper[4913]: E0121 06:50:44.719241 4913 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="extract-content" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719249 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="extract-content" Jan 21 06:50:44 crc kubenswrapper[4913]: E0121 06:50:44.719262 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="extract-utilities" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719280 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="extract-utilities" Jan 21 06:50:44 crc kubenswrapper[4913]: E0121 06:50:44.719296 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="registry-server" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719305 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="registry-server" Jan 21 06:50:44 crc kubenswrapper[4913]: E0121 06:50:44.719326 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="extract-utilities" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719334 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="extract-utilities" Jan 21 06:50:44 crc kubenswrapper[4913]: E0121 06:50:44.719348 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="registry-server" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719355 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="registry-server" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719486 4913 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bdfc921b-6ade-4913-afd4-4b75ebcead15" containerName="registry-server" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.719504 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f64f62-a622-4f16-9931-159d25ea6a0d" containerName="registry-server" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.720902 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.723022 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f64wp" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.728410 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw"] Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.901176 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-bundle\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.901231 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-util\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:44 crc kubenswrapper[4913]: I0121 06:50:44.901307 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-99994\" (UniqueName: \"kubernetes.io/projected/e3feb49b-10bf-4116-91b9-e9b726161892-kube-api-access-99994\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.002936 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-bundle\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.003003 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-util\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.003081 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99994\" (UniqueName: \"kubernetes.io/projected/e3feb49b-10bf-4116-91b9-e9b726161892-kube-api-access-99994\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.003519 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-bundle\") pod 
\"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.003697 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-util\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.030296 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99994\" (UniqueName: \"kubernetes.io/projected/e3feb49b-10bf-4116-91b9-e9b726161892-kube-api-access-99994\") pod \"34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.050417 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.512669 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw"] Jan 21 06:50:45 crc kubenswrapper[4913]: I0121 06:50:45.530221 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" event={"ID":"e3feb49b-10bf-4116-91b9-e9b726161892","Type":"ContainerStarted","Data":"2b9ab5e998a7f78e1348156646ad20a9c7210ce8c6d3c88a0c223b6660c003b2"} Jan 21 06:50:46 crc kubenswrapper[4913]: I0121 06:50:46.540687 4913 generic.go:334] "Generic (PLEG): container finished" podID="e3feb49b-10bf-4116-91b9-e9b726161892" containerID="f64e19c7af4171a78023ca3711a7eec83f0f3b9547ff3c69e634b90c2c0582db" exitCode=0 Jan 21 06:50:46 crc kubenswrapper[4913]: I0121 06:50:46.540794 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" event={"ID":"e3feb49b-10bf-4116-91b9-e9b726161892","Type":"ContainerDied","Data":"f64e19c7af4171a78023ca3711a7eec83f0f3b9547ff3c69e634b90c2c0582db"} Jan 21 06:50:47 crc kubenswrapper[4913]: I0121 06:50:47.551436 4913 generic.go:334] "Generic (PLEG): container finished" podID="e3feb49b-10bf-4116-91b9-e9b726161892" containerID="60ac37c77e23483afc0614ffbcd77f3112a7195bb5009179ec07fc76cbf42d75" exitCode=0 Jan 21 06:50:47 crc kubenswrapper[4913]: I0121 06:50:47.551492 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" event={"ID":"e3feb49b-10bf-4116-91b9-e9b726161892","Type":"ContainerDied","Data":"60ac37c77e23483afc0614ffbcd77f3112a7195bb5009179ec07fc76cbf42d75"} Jan 21 06:50:48 crc kubenswrapper[4913]: I0121 06:50:48.563645 4913 generic.go:334] 
"Generic (PLEG): container finished" podID="e3feb49b-10bf-4116-91b9-e9b726161892" containerID="461bda799565e5924857f0b3e4f758b75acec0c9a9a9ac5312facf66ecd33abe" exitCode=0 Jan 21 06:50:48 crc kubenswrapper[4913]: I0121 06:50:48.563711 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" event={"ID":"e3feb49b-10bf-4116-91b9-e9b726161892","Type":"ContainerDied","Data":"461bda799565e5924857f0b3e4f758b75acec0c9a9a9ac5312facf66ecd33abe"} Jan 21 06:50:49 crc kubenswrapper[4913]: I0121 06:50:49.963898 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.077039 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99994\" (UniqueName: \"kubernetes.io/projected/e3feb49b-10bf-4116-91b9-e9b726161892-kube-api-access-99994\") pod \"e3feb49b-10bf-4116-91b9-e9b726161892\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.077300 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-bundle\") pod \"e3feb49b-10bf-4116-91b9-e9b726161892\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.077837 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-util\") pod \"e3feb49b-10bf-4116-91b9-e9b726161892\" (UID: \"e3feb49b-10bf-4116-91b9-e9b726161892\") " Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.079360 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-bundle" (OuterVolumeSpecName: "bundle") pod "e3feb49b-10bf-4116-91b9-e9b726161892" (UID: "e3feb49b-10bf-4116-91b9-e9b726161892"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.085848 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3feb49b-10bf-4116-91b9-e9b726161892-kube-api-access-99994" (OuterVolumeSpecName: "kube-api-access-99994") pod "e3feb49b-10bf-4116-91b9-e9b726161892" (UID: "e3feb49b-10bf-4116-91b9-e9b726161892"). InnerVolumeSpecName "kube-api-access-99994". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.098217 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-util" (OuterVolumeSpecName: "util") pod "e3feb49b-10bf-4116-91b9-e9b726161892" (UID: "e3feb49b-10bf-4116-91b9-e9b726161892"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.179374 4913 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-util\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.179411 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99994\" (UniqueName: \"kubernetes.io/projected/e3feb49b-10bf-4116-91b9-e9b726161892-kube-api-access-99994\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.179427 4913 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3feb49b-10bf-4116-91b9-e9b726161892-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.583743 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" event={"ID":"e3feb49b-10bf-4116-91b9-e9b726161892","Type":"ContainerDied","Data":"2b9ab5e998a7f78e1348156646ad20a9c7210ce8c6d3c88a0c223b6660c003b2"} Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.583798 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b9ab5e998a7f78e1348156646ad20a9c7210ce8c6d3c88a0c223b6660c003b2" Jan 21 06:50:50 crc kubenswrapper[4913]: I0121 06:50:50.583830 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.541237 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv"] Jan 21 06:51:00 crc kubenswrapper[4913]: E0121 06:51:00.542872 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="util" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.542941 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="util" Jan 21 06:51:00 crc kubenswrapper[4913]: E0121 06:51:00.543006 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="pull" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.543059 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="pull" Jan 21 06:51:00 crc kubenswrapper[4913]: E0121 06:51:00.543123 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="extract" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.543181 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="extract" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.543341 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" containerName="extract" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.543806 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.546379 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.546518 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-9v6xc" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.553120 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv"] Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.730325 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69pvm\" (UniqueName: \"kubernetes.io/projected/2eed1c9d-583b-4678-a6d4-25ede526deb2-kube-api-access-69pvm\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.730369 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-webhook-cert\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.730545 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-apiservice-cert\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" 
(UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.831424 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-apiservice-cert\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.831498 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69pvm\" (UniqueName: \"kubernetes.io/projected/2eed1c9d-583b-4678-a6d4-25ede526deb2-kube-api-access-69pvm\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.831526 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-webhook-cert\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.836522 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-webhook-cert\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.841907 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-apiservice-cert\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.859813 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69pvm\" (UniqueName: \"kubernetes.io/projected/2eed1c9d-583b-4678-a6d4-25ede526deb2-kube-api-access-69pvm\") pod \"keystone-operator-controller-manager-54fff4d8f9-xtknv\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:00 crc kubenswrapper[4913]: I0121 06:51:00.868020 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:01 crc kubenswrapper[4913]: I0121 06:51:01.382892 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv"] Jan 21 06:51:01 crc kubenswrapper[4913]: I0121 06:51:01.664731 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" event={"ID":"2eed1c9d-583b-4678-a6d4-25ede526deb2","Type":"ContainerStarted","Data":"5f11053bf6e8005edf5c878b1053cb5b2f458f735b16ba02d777871ab59cfd24"} Jan 21 06:51:05 crc kubenswrapper[4913]: I0121 06:51:05.698437 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" event={"ID":"2eed1c9d-583b-4678-a6d4-25ede526deb2","Type":"ContainerStarted","Data":"e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2"} Jan 21 06:51:05 crc kubenswrapper[4913]: I0121 
06:51:05.699181 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:51:05 crc kubenswrapper[4913]: I0121 06:51:05.724833 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" podStartSLOduration=1.762090171 podStartE2EDuration="5.72481513s" podCreationTimestamp="2026-01-21 06:51:00 +0000 UTC" firstStartedPulling="2026-01-21 06:51:01.392389693 +0000 UTC m=+951.188749366" lastFinishedPulling="2026-01-21 06:51:05.355114652 +0000 UTC m=+955.151474325" observedRunningTime="2026-01-21 06:51:05.720519623 +0000 UTC m=+955.516879366" watchObservedRunningTime="2026-01-21 06:51:05.72481513 +0000 UTC m=+955.521174803" Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.319658 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.320210 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.320306 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.321378 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e0c9de231b7b7faa5ada83afcec7341a724626bb49b10ceffe9951e2dc769908"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.321522 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://e0c9de231b7b7faa5ada83afcec7341a724626bb49b10ceffe9951e2dc769908" gracePeriod=600 Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.722450 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="e0c9de231b7b7faa5ada83afcec7341a724626bb49b10ceffe9951e2dc769908" exitCode=0 Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.722513 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"e0c9de231b7b7faa5ada83afcec7341a724626bb49b10ceffe9951e2dc769908"} Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.722702 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"bc5cf18289e8129ab669a4c6ce772cd7b24630b3756f5dce3bd40297fec710a6"} Jan 21 06:51:08 crc kubenswrapper[4913]: I0121 06:51:08.722723 4913 scope.go:117] "RemoveContainer" containerID="9d9dff08306ba7a1ea84489cddd781c030c4851b9fa6eef7b12bb8a6ced5e1f3" Jan 21 06:51:10 crc kubenswrapper[4913]: I0121 06:51:10.876126 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 
06:51:11 crc kubenswrapper[4913]: I0121 06:51:11.752121 4913 generic.go:334] "Generic (PLEG): container finished" podID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerID="f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d" exitCode=0 Jan 21 06:51:11 crc kubenswrapper[4913]: I0121 06:51:11.752180 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7","Type":"ContainerDied","Data":"f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d"} Jan 21 06:51:12 crc kubenswrapper[4913]: I0121 06:51:12.772107 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7","Type":"ContainerStarted","Data":"f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54"} Jan 21 06:51:12 crc kubenswrapper[4913]: I0121 06:51:12.773172 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:51:12 crc kubenswrapper[4913]: I0121 06:51:12.796817 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/rabbitmq-server-0" podStartSLOduration=37.117300665 podStartE2EDuration="1m6.796802856s" podCreationTimestamp="2026-01-21 06:50:06 +0000 UTC" firstStartedPulling="2026-01-21 06:50:07.813826122 +0000 UTC m=+897.610185795" lastFinishedPulling="2026-01-21 06:50:37.493328313 +0000 UTC m=+927.289687986" observedRunningTime="2026-01-21 06:51:12.794097164 +0000 UTC m=+962.590456837" watchObservedRunningTime="2026-01-21 06:51:12.796802856 +0000 UTC m=+962.593162529" Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.659849 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-index-4jlfb"] Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.662281 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.664565 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-index-dockercfg-c9j95" Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.673558 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-index-4jlfb"] Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.743424 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4vm7\" (UniqueName: \"kubernetes.io/projected/4f61c697-fbcc-4e33-929b-03eacd477d73-kube-api-access-n4vm7\") pod \"cinder-operator-index-4jlfb\" (UID: \"4f61c697-fbcc-4e33-929b-03eacd477d73\") " pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.845657 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4vm7\" (UniqueName: \"kubernetes.io/projected/4f61c697-fbcc-4e33-929b-03eacd477d73-kube-api-access-n4vm7\") pod \"cinder-operator-index-4jlfb\" (UID: \"4f61c697-fbcc-4e33-929b-03eacd477d73\") " pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.871448 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4vm7\" (UniqueName: \"kubernetes.io/projected/4f61c697-fbcc-4e33-929b-03eacd477d73-kube-api-access-n4vm7\") pod \"cinder-operator-index-4jlfb\" (UID: \"4f61c697-fbcc-4e33-929b-03eacd477d73\") " pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:15 crc kubenswrapper[4913]: I0121 06:51:15.998711 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:16 crc kubenswrapper[4913]: I0121 06:51:16.474542 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-index-4jlfb"] Jan 21 06:51:16 crc kubenswrapper[4913]: W0121 06:51:16.475215 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f61c697_fbcc_4e33_929b_03eacd477d73.slice/crio-d11a8cc3dae9f6c80e40b9a2d85e4439b796186712180a07c1c9bd0221b721e6 WatchSource:0}: Error finding container d11a8cc3dae9f6c80e40b9a2d85e4439b796186712180a07c1c9bd0221b721e6: Status 404 returned error can't find the container with id d11a8cc3dae9f6c80e40b9a2d85e4439b796186712180a07c1c9bd0221b721e6 Jan 21 06:51:16 crc kubenswrapper[4913]: I0121 06:51:16.477057 4913 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 06:51:16 crc kubenswrapper[4913]: I0121 06:51:16.819549 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-4jlfb" event={"ID":"4f61c697-fbcc-4e33-929b-03eacd477d73","Type":"ContainerStarted","Data":"d11a8cc3dae9f6c80e40b9a2d85e4439b796186712180a07c1c9bd0221b721e6"} Jan 21 06:51:18 crc kubenswrapper[4913]: I0121 06:51:18.835938 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-4jlfb" event={"ID":"4f61c697-fbcc-4e33-929b-03eacd477d73","Type":"ContainerStarted","Data":"9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d"} Jan 21 06:51:18 crc kubenswrapper[4913]: I0121 06:51:18.866498 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-index-4jlfb" podStartSLOduration=2.157842719 podStartE2EDuration="3.866480861s" podCreationTimestamp="2026-01-21 06:51:15 +0000 UTC" firstStartedPulling="2026-01-21 06:51:16.476837578 +0000 UTC m=+966.273197251" 
lastFinishedPulling="2026-01-21 06:51:18.18547571 +0000 UTC m=+967.981835393" observedRunningTime="2026-01-21 06:51:18.862857933 +0000 UTC m=+968.659217646" watchObservedRunningTime="2026-01-21 06:51:18.866480861 +0000 UTC m=+968.662840534" Jan 21 06:51:26 crc kubenswrapper[4913]: I0121 06:51:25.999543 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:26 crc kubenswrapper[4913]: I0121 06:51:26.000343 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:26 crc kubenswrapper[4913]: I0121 06:51:26.041218 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:26 crc kubenswrapper[4913]: I0121 06:51:26.935476 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:51:27 crc kubenswrapper[4913]: I0121 06:51:27.378722 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.113406 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg"] Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.116289 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.127100 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg"] Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.129362 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-f64wp" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.256201 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-util\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.256826 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-bundle\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.257080 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwlh4\" (UniqueName: \"kubernetes.io/projected/01ea35b3-9885-4acc-bed4-05b6213940be-kube-api-access-zwlh4\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 
06:51:30.358834 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwlh4\" (UniqueName: \"kubernetes.io/projected/01ea35b3-9885-4acc-bed4-05b6213940be-kube-api-access-zwlh4\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.358923 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-util\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.359091 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-bundle\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.359758 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-util\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.360082 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-bundle\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.393120 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwlh4\" (UniqueName: \"kubernetes.io/projected/01ea35b3-9885-4acc-bed4-05b6213940be-kube-api-access-zwlh4\") pod \"87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.452336 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:30 crc kubenswrapper[4913]: I0121 06:51:30.918903 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg"] Jan 21 06:51:31 crc kubenswrapper[4913]: I0121 06:51:31.972036 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" event={"ID":"01ea35b3-9885-4acc-bed4-05b6213940be","Type":"ContainerStarted","Data":"3f2f464ed3abf9f52d4aa03e74d6bb9a3616be90a49c8face21759e38f52702f"} Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.353428 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn"] Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.354864 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.358879 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-db-secret" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.359971 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4g6xx"] Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.361062 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.363925 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4g6xx"] Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.368577 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn"] Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.508465 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcpg4\" (UniqueName: \"kubernetes.io/projected/8ce82f18-1e1d-40f1-8207-428ea9445bc3-kube-api-access-wcpg4\") pod \"keystone-c9ef-account-create-update-l49vn\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.508954 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e33604-9af2-42b5-b1ad-ecd76d4898d4-operator-scripts\") pod \"keystone-db-create-4g6xx\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.509185 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ce82f18-1e1d-40f1-8207-428ea9445bc3-operator-scripts\") pod \"keystone-c9ef-account-create-update-l49vn\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.509450 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c598t\" (UniqueName: \"kubernetes.io/projected/15e33604-9af2-42b5-b1ad-ecd76d4898d4-kube-api-access-c598t\") pod \"keystone-db-create-4g6xx\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.610789 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e33604-9af2-42b5-b1ad-ecd76d4898d4-operator-scripts\") pod \"keystone-db-create-4g6xx\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.611123 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ce82f18-1e1d-40f1-8207-428ea9445bc3-operator-scripts\") pod \"keystone-c9ef-account-create-update-l49vn\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.611212 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c598t\" (UniqueName: \"kubernetes.io/projected/15e33604-9af2-42b5-b1ad-ecd76d4898d4-kube-api-access-c598t\") pod \"keystone-db-create-4g6xx\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 
06:51:33.611368 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcpg4\" (UniqueName: \"kubernetes.io/projected/8ce82f18-1e1d-40f1-8207-428ea9445bc3-kube-api-access-wcpg4\") pod \"keystone-c9ef-account-create-update-l49vn\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.612159 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e33604-9af2-42b5-b1ad-ecd76d4898d4-operator-scripts\") pod \"keystone-db-create-4g6xx\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.612970 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ce82f18-1e1d-40f1-8207-428ea9445bc3-operator-scripts\") pod \"keystone-c9ef-account-create-update-l49vn\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.636211 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c598t\" (UniqueName: \"kubernetes.io/projected/15e33604-9af2-42b5-b1ad-ecd76d4898d4-kube-api-access-c598t\") pod \"keystone-db-create-4g6xx\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.642571 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcpg4\" (UniqueName: \"kubernetes.io/projected/8ce82f18-1e1d-40f1-8207-428ea9445bc3-kube-api-access-wcpg4\") pod \"keystone-c9ef-account-create-update-l49vn\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " 
pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.692850 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:33 crc kubenswrapper[4913]: I0121 06:51:33.708691 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:34 crc kubenswrapper[4913]: I0121 06:51:34.029818 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4g6xx"] Jan 21 06:51:34 crc kubenswrapper[4913]: I0121 06:51:34.173152 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn"] Jan 21 06:51:34 crc kubenswrapper[4913]: W0121 06:51:34.183789 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ce82f18_1e1d_40f1_8207_428ea9445bc3.slice/crio-a149efeb6bd5a232f92a471ec35c43b0420bdd83304a01a87e00d95e97132e63 WatchSource:0}: Error finding container a149efeb6bd5a232f92a471ec35c43b0420bdd83304a01a87e00d95e97132e63: Status 404 returned error can't find the container with id a149efeb6bd5a232f92a471ec35c43b0420bdd83304a01a87e00d95e97132e63 Jan 21 06:51:34 crc kubenswrapper[4913]: E0121 06:51:34.598745 4913 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01ea35b3_9885_4acc_bed4_05b6213940be.slice/crio-e3dd37a383fdecec61ce46d8e11f1c1ebc9bad6811b9f26d39315ce8ccbe7680.scope\": RecentStats: unable to find data in memory cache]" Jan 21 06:51:34 crc kubenswrapper[4913]: I0121 06:51:34.998687 4913 generic.go:334] "Generic (PLEG): container finished" podID="8ce82f18-1e1d-40f1-8207-428ea9445bc3" 
containerID="b5c65ed731440220892793dc0f5f5c1250a99d03d67a71b6685779fcad076adc" exitCode=0 Jan 21 06:51:34 crc kubenswrapper[4913]: I0121 06:51:34.998777 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" event={"ID":"8ce82f18-1e1d-40f1-8207-428ea9445bc3","Type":"ContainerDied","Data":"b5c65ed731440220892793dc0f5f5c1250a99d03d67a71b6685779fcad076adc"} Jan 21 06:51:34 crc kubenswrapper[4913]: I0121 06:51:34.999314 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" event={"ID":"8ce82f18-1e1d-40f1-8207-428ea9445bc3","Type":"ContainerStarted","Data":"a149efeb6bd5a232f92a471ec35c43b0420bdd83304a01a87e00d95e97132e63"} Jan 21 06:51:35 crc kubenswrapper[4913]: I0121 06:51:35.002198 4913 generic.go:334] "Generic (PLEG): container finished" podID="15e33604-9af2-42b5-b1ad-ecd76d4898d4" containerID="659f77de9cd51e636b4491f066c76f374e6bc4986d8367fb979a0e570225f47e" exitCode=0 Jan 21 06:51:35 crc kubenswrapper[4913]: I0121 06:51:35.002290 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" event={"ID":"15e33604-9af2-42b5-b1ad-ecd76d4898d4","Type":"ContainerDied","Data":"659f77de9cd51e636b4491f066c76f374e6bc4986d8367fb979a0e570225f47e"} Jan 21 06:51:35 crc kubenswrapper[4913]: I0121 06:51:35.002311 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" event={"ID":"15e33604-9af2-42b5-b1ad-ecd76d4898d4","Type":"ContainerStarted","Data":"e10ac755e324f7bd2a233b427bda4103fca92b78e293af0ac59ee5dd08183dc8"} Jan 21 06:51:35 crc kubenswrapper[4913]: I0121 06:51:35.006066 4913 generic.go:334] "Generic (PLEG): container finished" podID="01ea35b3-9885-4acc-bed4-05b6213940be" containerID="e3dd37a383fdecec61ce46d8e11f1c1ebc9bad6811b9f26d39315ce8ccbe7680" exitCode=0 Jan 21 06:51:35 crc kubenswrapper[4913]: I0121 06:51:35.006121 4913 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" event={"ID":"01ea35b3-9885-4acc-bed4-05b6213940be","Type":"ContainerDied","Data":"e3dd37a383fdecec61ce46d8e11f1c1ebc9bad6811b9f26d39315ce8ccbe7680"} Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.018088 4913 generic.go:334] "Generic (PLEG): container finished" podID="01ea35b3-9885-4acc-bed4-05b6213940be" containerID="57ae751f0ac8e317da709793a9908e104d2805bb250e026f326c599fd971bccb" exitCode=0 Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.018212 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" event={"ID":"01ea35b3-9885-4acc-bed4-05b6213940be","Type":"ContainerDied","Data":"57ae751f0ac8e317da709793a9908e104d2805bb250e026f326c599fd971bccb"} Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.376874 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.459485 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ce82f18-1e1d-40f1-8207-428ea9445bc3-operator-scripts\") pod \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.459583 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcpg4\" (UniqueName: \"kubernetes.io/projected/8ce82f18-1e1d-40f1-8207-428ea9445bc3-kube-api-access-wcpg4\") pod \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\" (UID: \"8ce82f18-1e1d-40f1-8207-428ea9445bc3\") " Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.460479 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ce82f18-1e1d-40f1-8207-428ea9445bc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ce82f18-1e1d-40f1-8207-428ea9445bc3" (UID: "8ce82f18-1e1d-40f1-8207-428ea9445bc3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.467385 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ce82f18-1e1d-40f1-8207-428ea9445bc3-kube-api-access-wcpg4" (OuterVolumeSpecName: "kube-api-access-wcpg4") pod "8ce82f18-1e1d-40f1-8207-428ea9445bc3" (UID: "8ce82f18-1e1d-40f1-8207-428ea9445bc3"). InnerVolumeSpecName "kube-api-access-wcpg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.469236 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.561092 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e33604-9af2-42b5-b1ad-ecd76d4898d4-operator-scripts\") pod \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.561188 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c598t\" (UniqueName: \"kubernetes.io/projected/15e33604-9af2-42b5-b1ad-ecd76d4898d4-kube-api-access-c598t\") pod \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\" (UID: \"15e33604-9af2-42b5-b1ad-ecd76d4898d4\") " Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.561544 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcpg4\" (UniqueName: \"kubernetes.io/projected/8ce82f18-1e1d-40f1-8207-428ea9445bc3-kube-api-access-wcpg4\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.561575 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ce82f18-1e1d-40f1-8207-428ea9445bc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.562285 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e33604-9af2-42b5-b1ad-ecd76d4898d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15e33604-9af2-42b5-b1ad-ecd76d4898d4" (UID: "15e33604-9af2-42b5-b1ad-ecd76d4898d4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.566209 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e33604-9af2-42b5-b1ad-ecd76d4898d4-kube-api-access-c598t" (OuterVolumeSpecName: "kube-api-access-c598t") pod "15e33604-9af2-42b5-b1ad-ecd76d4898d4" (UID: "15e33604-9af2-42b5-b1ad-ecd76d4898d4"). InnerVolumeSpecName "kube-api-access-c598t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.663537 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e33604-9af2-42b5-b1ad-ecd76d4898d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:36 crc kubenswrapper[4913]: I0121 06:51:36.663616 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c598t\" (UniqueName: \"kubernetes.io/projected/15e33604-9af2-42b5-b1ad-ecd76d4898d4-kube-api-access-c598t\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.032583 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.032577 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn" event={"ID":"8ce82f18-1e1d-40f1-8207-428ea9445bc3","Type":"ContainerDied","Data":"a149efeb6bd5a232f92a471ec35c43b0420bdd83304a01a87e00d95e97132e63"} Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.032793 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a149efeb6bd5a232f92a471ec35c43b0420bdd83304a01a87e00d95e97132e63" Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.035565 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" event={"ID":"15e33604-9af2-42b5-b1ad-ecd76d4898d4","Type":"ContainerDied","Data":"e10ac755e324f7bd2a233b427bda4103fca92b78e293af0ac59ee5dd08183dc8"} Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.035619 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-db-create-4g6xx" Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.035646 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e10ac755e324f7bd2a233b427bda4103fca92b78e293af0ac59ee5dd08183dc8" Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.039282 4913 generic.go:334] "Generic (PLEG): container finished" podID="01ea35b3-9885-4acc-bed4-05b6213940be" containerID="92a35170c3a228e725dc4577bc820bf64539f262c32a337b31f25d4c32fe9af7" exitCode=0 Jan 21 06:51:37 crc kubenswrapper[4913]: I0121 06:51:37.039337 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" event={"ID":"01ea35b3-9885-4acc-bed4-05b6213940be","Type":"ContainerDied","Data":"92a35170c3a228e725dc4577bc820bf64539f262c32a337b31f25d4c32fe9af7"} Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.327539 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.488322 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwlh4\" (UniqueName: \"kubernetes.io/projected/01ea35b3-9885-4acc-bed4-05b6213940be-kube-api-access-zwlh4\") pod \"01ea35b3-9885-4acc-bed4-05b6213940be\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.488454 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-bundle\") pod \"01ea35b3-9885-4acc-bed4-05b6213940be\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.488533 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-util\") pod \"01ea35b3-9885-4acc-bed4-05b6213940be\" (UID: \"01ea35b3-9885-4acc-bed4-05b6213940be\") " Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.490905 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-bundle" (OuterVolumeSpecName: "bundle") pod "01ea35b3-9885-4acc-bed4-05b6213940be" (UID: "01ea35b3-9885-4acc-bed4-05b6213940be"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.494481 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ea35b3-9885-4acc-bed4-05b6213940be-kube-api-access-zwlh4" (OuterVolumeSpecName: "kube-api-access-zwlh4") pod "01ea35b3-9885-4acc-bed4-05b6213940be" (UID: "01ea35b3-9885-4acc-bed4-05b6213940be"). InnerVolumeSpecName "kube-api-access-zwlh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.505488 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-util" (OuterVolumeSpecName: "util") pod "01ea35b3-9885-4acc-bed4-05b6213940be" (UID: "01ea35b3-9885-4acc-bed4-05b6213940be"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.589920 4913 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.589961 4913 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/01ea35b3-9885-4acc-bed4-05b6213940be-util\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.589973 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwlh4\" (UniqueName: \"kubernetes.io/projected/01ea35b3-9885-4acc-bed4-05b6213940be-kube-api-access-zwlh4\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.850109 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-dd79k"] Jan 21 06:51:38 crc kubenswrapper[4913]: E0121 06:51:38.850769 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" containerName="pull" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.850812 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" containerName="pull" Jan 21 06:51:38 crc kubenswrapper[4913]: E0121 06:51:38.850839 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" containerName="extract" Jan 21 
06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.850857 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" containerName="extract" Jan 21 06:51:38 crc kubenswrapper[4913]: E0121 06:51:38.850895 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce82f18-1e1d-40f1-8207-428ea9445bc3" containerName="mariadb-account-create-update" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.850914 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce82f18-1e1d-40f1-8207-428ea9445bc3" containerName="mariadb-account-create-update" Jan 21 06:51:38 crc kubenswrapper[4913]: E0121 06:51:38.850941 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" containerName="util" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.850957 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" containerName="util" Jan 21 06:51:38 crc kubenswrapper[4913]: E0121 06:51:38.850995 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e33604-9af2-42b5-b1ad-ecd76d4898d4" containerName="mariadb-database-create" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.851011 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e33604-9af2-42b5-b1ad-ecd76d4898d4" containerName="mariadb-database-create" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.851272 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce82f18-1e1d-40f1-8207-428ea9445bc3" containerName="mariadb-account-create-update" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.851304 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e33604-9af2-42b5-b1ad-ecd76d4898d4" containerName="mariadb-database-create" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.851325 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" 
containerName="extract" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.852209 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.854862 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-scripts" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.858031 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-keystone-dockercfg-c5jhw" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.858344 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-config-data" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.858997 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.871502 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-dd79k"] Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.995966 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsvn4\" (UniqueName: \"kubernetes.io/projected/345b0465-d6ca-45e5-bd9d-47a6adacb366-kube-api-access-wsvn4\") pod \"keystone-db-sync-dd79k\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:38 crc kubenswrapper[4913]: I0121 06:51:38.996059 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345b0465-d6ca-45e5-bd9d-47a6adacb366-config-data\") pod \"keystone-db-sync-dd79k\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.057523 4913 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" event={"ID":"01ea35b3-9885-4acc-bed4-05b6213940be","Type":"ContainerDied","Data":"3f2f464ed3abf9f52d4aa03e74d6bb9a3616be90a49c8face21759e38f52702f"} Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.057565 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f2f464ed3abf9f52d4aa03e74d6bb9a3616be90a49c8face21759e38f52702f" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.057684 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.097722 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsvn4\" (UniqueName: \"kubernetes.io/projected/345b0465-d6ca-45e5-bd9d-47a6adacb366-kube-api-access-wsvn4\") pod \"keystone-db-sync-dd79k\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.097845 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345b0465-d6ca-45e5-bd9d-47a6adacb366-config-data\") pod \"keystone-db-sync-dd79k\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.102294 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345b0465-d6ca-45e5-bd9d-47a6adacb366-config-data\") pod \"keystone-db-sync-dd79k\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.113690 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wsvn4\" (UniqueName: \"kubernetes.io/projected/345b0465-d6ca-45e5-bd9d-47a6adacb366-kube-api-access-wsvn4\") pod \"keystone-db-sync-dd79k\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.190462 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:39 crc kubenswrapper[4913]: I0121 06:51:39.686389 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-dd79k"] Jan 21 06:51:40 crc kubenswrapper[4913]: I0121 06:51:40.067161 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" event={"ID":"345b0465-d6ca-45e5-bd9d-47a6adacb366","Type":"ContainerStarted","Data":"6931bb12a1c9b42fed5b142341ed107474b2250142986fb9d99419b2528a5a14"} Jan 21 06:51:47 crc kubenswrapper[4913]: I0121 06:51:47.124491 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" event={"ID":"345b0465-d6ca-45e5-bd9d-47a6adacb366","Type":"ContainerStarted","Data":"a22a190e1ff258d8fa0dac8b8437c0615341f76e3c06c6668ce4c7053be4e2a1"} Jan 21 06:51:47 crc kubenswrapper[4913]: I0121 06:51:47.154384 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" podStartSLOduration=2.040497727 podStartE2EDuration="9.15436201s" podCreationTimestamp="2026-01-21 06:51:38 +0000 UTC" firstStartedPulling="2026-01-21 06:51:39.705771344 +0000 UTC m=+989.502131017" lastFinishedPulling="2026-01-21 06:51:46.819635627 +0000 UTC m=+996.615995300" observedRunningTime="2026-01-21 06:51:47.146872138 +0000 UTC m=+996.943231821" watchObservedRunningTime="2026-01-21 06:51:47.15436201 +0000 UTC m=+996.950721693" Jan 21 06:51:51 crc kubenswrapper[4913]: I0121 06:51:51.149891 4913 generic.go:334] "Generic (PLEG): container finished" 
podID="345b0465-d6ca-45e5-bd9d-47a6adacb366" containerID="a22a190e1ff258d8fa0dac8b8437c0615341f76e3c06c6668ce4c7053be4e2a1" exitCode=0 Jan 21 06:51:51 crc kubenswrapper[4913]: I0121 06:51:51.150026 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" event={"ID":"345b0465-d6ca-45e5-bd9d-47a6adacb366","Type":"ContainerDied","Data":"a22a190e1ff258d8fa0dac8b8437c0615341f76e3c06c6668ce4c7053be4e2a1"} Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.532204 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.593082 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345b0465-d6ca-45e5-bd9d-47a6adacb366-config-data\") pod \"345b0465-d6ca-45e5-bd9d-47a6adacb366\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.593156 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsvn4\" (UniqueName: \"kubernetes.io/projected/345b0465-d6ca-45e5-bd9d-47a6adacb366-kube-api-access-wsvn4\") pod \"345b0465-d6ca-45e5-bd9d-47a6adacb366\" (UID: \"345b0465-d6ca-45e5-bd9d-47a6adacb366\") " Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.618884 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/345b0465-d6ca-45e5-bd9d-47a6adacb366-kube-api-access-wsvn4" (OuterVolumeSpecName: "kube-api-access-wsvn4") pod "345b0465-d6ca-45e5-bd9d-47a6adacb366" (UID: "345b0465-d6ca-45e5-bd9d-47a6adacb366"). InnerVolumeSpecName "kube-api-access-wsvn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.693237 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/345b0465-d6ca-45e5-bd9d-47a6adacb366-config-data" (OuterVolumeSpecName: "config-data") pod "345b0465-d6ca-45e5-bd9d-47a6adacb366" (UID: "345b0465-d6ca-45e5-bd9d-47a6adacb366"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.695011 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsvn4\" (UniqueName: \"kubernetes.io/projected/345b0465-d6ca-45e5-bd9d-47a6adacb366-kube-api-access-wsvn4\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:52 crc kubenswrapper[4913]: I0121 06:51:52.695040 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/345b0465-d6ca-45e5-bd9d-47a6adacb366-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.165265 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" event={"ID":"345b0465-d6ca-45e5-bd9d-47a6adacb366","Type":"ContainerDied","Data":"6931bb12a1c9b42fed5b142341ed107474b2250142986fb9d99419b2528a5a14"} Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.165299 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6931bb12a1c9b42fed5b142341ed107474b2250142986fb9d99419b2528a5a14" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.165326 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-db-sync-dd79k" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.363315 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-p8tbb"] Jan 21 06:51:53 crc kubenswrapper[4913]: E0121 06:51:53.363542 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345b0465-d6ca-45e5-bd9d-47a6adacb366" containerName="keystone-db-sync" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.363553 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="345b0465-d6ca-45e5-bd9d-47a6adacb366" containerName="keystone-db-sync" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.363683 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="345b0465-d6ca-45e5-bd9d-47a6adacb366" containerName="keystone-db-sync" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.364119 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.366069 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.366839 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-config-data" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.366950 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"osp-secret" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.366883 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-scripts" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.371381 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-keystone-dockercfg-c5jhw" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.389786 4913 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-p8tbb"] Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.405228 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-config-data\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.405281 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-credential-keys\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.405304 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-fernet-keys\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.405331 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-scripts\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.405354 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4c96\" (UniqueName: \"kubernetes.io/projected/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-kube-api-access-p4c96\") pod \"keystone-bootstrap-p8tbb\" (UID: 
\"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.506934 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-config-data\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.507006 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-credential-keys\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.507034 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-fernet-keys\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.507058 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-scripts\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.507081 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4c96\" (UniqueName: \"kubernetes.io/projected/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-kube-api-access-p4c96\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " 
pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.510470 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-scripts\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.510881 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-fernet-keys\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.510964 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-config-data\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.512902 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-credential-keys\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 06:51:53.533047 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4c96\" (UniqueName: \"kubernetes.io/projected/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-kube-api-access-p4c96\") pod \"keystone-bootstrap-p8tbb\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:53 crc kubenswrapper[4913]: I0121 
06:51:53.683632 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:54 crc kubenswrapper[4913]: I0121 06:51:54.138880 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-p8tbb"] Jan 21 06:51:54 crc kubenswrapper[4913]: I0121 06:51:54.174241 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" event={"ID":"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56","Type":"ContainerStarted","Data":"075d4a9a9bb4a3c21cd06c76917b83915cf7f052402d9e8109d8ea058367eccd"} Jan 21 06:51:55 crc kubenswrapper[4913]: I0121 06:51:55.183189 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" event={"ID":"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56","Type":"ContainerStarted","Data":"65cd0cea93dd7b6363d627c73b949bbd4db992664864de3f7764764d0faf09c3"} Jan 21 06:51:57 crc kubenswrapper[4913]: I0121 06:51:57.197918 4913 generic.go:334] "Generic (PLEG): container finished" podID="694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" containerID="65cd0cea93dd7b6363d627c73b949bbd4db992664864de3f7764764d0faf09c3" exitCode=0 Jan 21 06:51:57 crc kubenswrapper[4913]: I0121 06:51:57.198120 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" event={"ID":"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56","Type":"ContainerDied","Data":"65cd0cea93dd7b6363d627c73b949bbd4db992664864de3f7764764d0faf09c3"} Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.475623 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"] Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.477868 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.483414 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wlhbb" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.483602 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-service-cert" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.500141 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"] Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.567420 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.588474 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-apiservice-cert\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.588568 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-webhook-cert\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.588626 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4c7c7\" (UniqueName: \"kubernetes.io/projected/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-kube-api-access-4c7c7\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.689510 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-credential-keys\") pod \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.689666 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4c96\" (UniqueName: \"kubernetes.io/projected/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-kube-api-access-p4c96\") pod \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.689708 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-config-data\") pod \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.689727 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-fernet-keys\") pod \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.689780 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-scripts\") pod 
\"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\" (UID: \"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56\") " Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.690013 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-apiservice-cert\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.691097 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-webhook-cert\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.691148 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c7c7\" (UniqueName: \"kubernetes.io/projected/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-kube-api-access-4c7c7\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.697759 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" (UID: "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.698343 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-scripts" (OuterVolumeSpecName: "scripts") pod "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" (UID: "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.698404 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-kube-api-access-p4c96" (OuterVolumeSpecName: "kube-api-access-p4c96") pod "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" (UID: "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56"). InnerVolumeSpecName "kube-api-access-p4c96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.703840 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" (UID: "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.706405 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-apiservice-cert\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.706469 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-webhook-cert\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.713080 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-config-data" (OuterVolumeSpecName: "config-data") pod "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" (UID: "694c0c16-1814-40f7-b7a8-c6f4d7ee7a56"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.736301 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c7c7\" (UniqueName: \"kubernetes.io/projected/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-kube-api-access-4c7c7\") pod \"cinder-operator-controller-manager-57ddd6455-fxhz2\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.792852 4913 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.792897 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4c96\" (UniqueName: \"kubernetes.io/projected/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-kube-api-access-p4c96\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.792910 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.792925 4913 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.792935 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:51:58 crc kubenswrapper[4913]: I0121 06:51:58.868944 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.117625 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"] Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.210725 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.210723 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-bootstrap-p8tbb" event={"ID":"694c0c16-1814-40f7-b7a8-c6f4d7ee7a56","Type":"ContainerDied","Data":"075d4a9a9bb4a3c21cd06c76917b83915cf7f052402d9e8109d8ea058367eccd"} Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.210879 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="075d4a9a9bb4a3c21cd06c76917b83915cf7f052402d9e8109d8ea058367eccd" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.212457 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" event={"ID":"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d","Type":"ContainerStarted","Data":"f82866f4a640b05de032b2242387c51f49628b80b3fcaf42729718719aa9d672"} Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.305178 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystone-8b78684d-zcwhw"] Jan 21 06:51:59 crc kubenswrapper[4913]: E0121 06:51:59.305486 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" containerName="keystone-bootstrap" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.305511 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" containerName="keystone-bootstrap" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 
06:51:59.305692 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" containerName="keystone-bootstrap" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.306172 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.310243 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-scripts" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.310512 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-config-data" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.310677 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.310804 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"keystone-keystone-dockercfg-c5jhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.318278 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-8b78684d-zcwhw"] Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.407005 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-config-data\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.407084 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-fernet-keys\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " 
pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.407149 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-credential-keys\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.407187 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-scripts\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.407210 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6l77\" (UniqueName: \"kubernetes.io/projected/fde82b66-4c57-4f59-839e-5ccb89d18944-kube-api-access-z6l77\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.512526 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-credential-keys\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.512621 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-scripts\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " 
pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.512678 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6l77\" (UniqueName: \"kubernetes.io/projected/fde82b66-4c57-4f59-839e-5ccb89d18944-kube-api-access-z6l77\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.512760 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-config-data\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.512843 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-fernet-keys\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.517272 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-fernet-keys\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.517286 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-credential-keys\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: 
I0121 06:51:59.517407 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-config-data\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.522325 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-scripts\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.530234 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6l77\" (UniqueName: \"kubernetes.io/projected/fde82b66-4c57-4f59-839e-5ccb89d18944-kube-api-access-z6l77\") pod \"keystone-8b78684d-zcwhw\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:51:59 crc kubenswrapper[4913]: I0121 06:51:59.622278 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:52:00 crc kubenswrapper[4913]: I0121 06:52:00.055650 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystone-8b78684d-zcwhw"] Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.229677 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" event={"ID":"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d","Type":"ContainerStarted","Data":"19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a"} Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.230061 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.231166 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" event={"ID":"fde82b66-4c57-4f59-839e-5ccb89d18944","Type":"ContainerStarted","Data":"d294c4ea90e5f953926801ea6719d5c61ee2092ec2dce8435ddc0d9c0d451588"} Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.231222 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" event={"ID":"fde82b66-4c57-4f59-839e-5ccb89d18944","Type":"ContainerStarted","Data":"319d2fc6458bad5a006b1117b9ecf9841ebe516000026a3a782671bea30c10cd"} Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.231346 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.251346 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" podStartSLOduration=1.923891597 podStartE2EDuration="3.25132972s" podCreationTimestamp="2026-01-21 06:51:58 +0000 UTC" firstStartedPulling="2026-01-21 06:51:59.127422539 
+0000 UTC m=+1008.923782212" lastFinishedPulling="2026-01-21 06:52:00.454860622 +0000 UTC m=+1010.251220335" observedRunningTime="2026-01-21 06:52:01.247780764 +0000 UTC m=+1011.044140477" watchObservedRunningTime="2026-01-21 06:52:01.25132972 +0000 UTC m=+1011.047689393" Jan 21 06:52:01 crc kubenswrapper[4913]: I0121 06:52:01.270228 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" podStartSLOduration=2.27019848 podStartE2EDuration="2.27019848s" podCreationTimestamp="2026-01-21 06:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:52:01.270165749 +0000 UTC m=+1011.066525432" watchObservedRunningTime="2026-01-21 06:52:01.27019848 +0000 UTC m=+1011.066558193" Jan 21 06:52:08 crc kubenswrapper[4913]: I0121 06:52:08.872678 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.310187 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-create-88w4d"] Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.311428 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-88w4d" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.317106 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-88w4d"] Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.404998 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2"] Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.405824 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.407673 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-db-secret" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.417895 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2"] Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.428524 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txp7j\" (UniqueName: \"kubernetes.io/projected/84e5eed1-ff67-483b-808d-466413987e09-kube-api-access-txp7j\") pod \"cinder-db-create-88w4d\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " pod="cinder-kuttl-tests/cinder-db-create-88w4d" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.428709 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e5eed1-ff67-483b-808d-466413987e09-operator-scripts\") pod \"cinder-db-create-88w4d\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " pod="cinder-kuttl-tests/cinder-db-create-88w4d" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.530417 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de97e815-d3a3-4a3d-81e2-6054f65b82f0-operator-scripts\") pod \"cinder-a707-account-create-update-c2xg2\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.530489 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e5eed1-ff67-483b-808d-466413987e09-operator-scripts\") pod 
\"cinder-db-create-88w4d\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " pod="cinder-kuttl-tests/cinder-db-create-88w4d" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.530610 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kjpt\" (UniqueName: \"kubernetes.io/projected/de97e815-d3a3-4a3d-81e2-6054f65b82f0-kube-api-access-8kjpt\") pod \"cinder-a707-account-create-update-c2xg2\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.530766 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txp7j\" (UniqueName: \"kubernetes.io/projected/84e5eed1-ff67-483b-808d-466413987e09-kube-api-access-txp7j\") pod \"cinder-db-create-88w4d\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " pod="cinder-kuttl-tests/cinder-db-create-88w4d" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.531170 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e5eed1-ff67-483b-808d-466413987e09-operator-scripts\") pod \"cinder-db-create-88w4d\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " pod="cinder-kuttl-tests/cinder-db-create-88w4d" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.574431 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txp7j\" (UniqueName: \"kubernetes.io/projected/84e5eed1-ff67-483b-808d-466413987e09-kube-api-access-txp7j\") pod \"cinder-db-create-88w4d\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " pod="cinder-kuttl-tests/cinder-db-create-88w4d" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.631945 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-88w4d" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.632143 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de97e815-d3a3-4a3d-81e2-6054f65b82f0-operator-scripts\") pod \"cinder-a707-account-create-update-c2xg2\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.632212 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kjpt\" (UniqueName: \"kubernetes.io/projected/de97e815-d3a3-4a3d-81e2-6054f65b82f0-kube-api-access-8kjpt\") pod \"cinder-a707-account-create-update-c2xg2\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.633021 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de97e815-d3a3-4a3d-81e2-6054f65b82f0-operator-scripts\") pod \"cinder-a707-account-create-update-c2xg2\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.656584 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kjpt\" (UniqueName: \"kubernetes.io/projected/de97e815-d3a3-4a3d-81e2-6054f65b82f0-kube-api-access-8kjpt\") pod \"cinder-a707-account-create-update-c2xg2\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:14 crc kubenswrapper[4913]: I0121 06:52:14.719105 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:15 crc kubenswrapper[4913]: I0121 06:52:15.065115 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-88w4d"] Jan 21 06:52:15 crc kubenswrapper[4913]: I0121 06:52:15.161402 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2"] Jan 21 06:52:15 crc kubenswrapper[4913]: I0121 06:52:15.336210 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-88w4d" event={"ID":"84e5eed1-ff67-483b-808d-466413987e09","Type":"ContainerStarted","Data":"8a3f20a49f6d57365eb79b7bf4e963d8c25f5eb4c885817d083623c4901b1ce7"} Jan 21 06:52:15 crc kubenswrapper[4913]: I0121 06:52:15.337886 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" event={"ID":"de97e815-d3a3-4a3d-81e2-6054f65b82f0","Type":"ContainerStarted","Data":"b5393f25576b08afc85732da2f72e652c47836419a36e49a4a89ca0fdc5ced01"} Jan 21 06:52:17 crc kubenswrapper[4913]: I0121 06:52:17.352418 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-88w4d" event={"ID":"84e5eed1-ff67-483b-808d-466413987e09","Type":"ContainerStarted","Data":"adff856e206c29a325cd94fb3c5dd663d301aeb9797e99aeea00e843d581cf79"} Jan 21 06:52:17 crc kubenswrapper[4913]: I0121 06:52:17.353398 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" event={"ID":"de97e815-d3a3-4a3d-81e2-6054f65b82f0","Type":"ContainerStarted","Data":"bd6ec813f35e1b00979db706282fff37dec10f9429ac91969695adf37e90c613"} Jan 21 06:52:17 crc kubenswrapper[4913]: I0121 06:52:17.370801 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-db-create-88w4d" podStartSLOduration=3.370783723 podStartE2EDuration="3.370783723s" 
podCreationTimestamp="2026-01-21 06:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:52:17.367849213 +0000 UTC m=+1027.164208896" watchObservedRunningTime="2026-01-21 06:52:17.370783723 +0000 UTC m=+1027.167143396" Jan 21 06:52:17 crc kubenswrapper[4913]: I0121 06:52:17.388026 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" podStartSLOduration=3.388006608 podStartE2EDuration="3.388006608s" podCreationTimestamp="2026-01-21 06:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:52:17.384826432 +0000 UTC m=+1027.181186125" watchObservedRunningTime="2026-01-21 06:52:17.388006608 +0000 UTC m=+1027.184366301" Jan 21 06:52:18 crc kubenswrapper[4913]: I0121 06:52:18.365547 4913 generic.go:334] "Generic (PLEG): container finished" podID="84e5eed1-ff67-483b-808d-466413987e09" containerID="adff856e206c29a325cd94fb3c5dd663d301aeb9797e99aeea00e843d581cf79" exitCode=0 Jan 21 06:52:18 crc kubenswrapper[4913]: I0121 06:52:18.365650 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-88w4d" event={"ID":"84e5eed1-ff67-483b-808d-466413987e09","Type":"ContainerDied","Data":"adff856e206c29a325cd94fb3c5dd663d301aeb9797e99aeea00e843d581cf79"} Jan 21 06:52:18 crc kubenswrapper[4913]: I0121 06:52:18.369140 4913 generic.go:334] "Generic (PLEG): container finished" podID="de97e815-d3a3-4a3d-81e2-6054f65b82f0" containerID="bd6ec813f35e1b00979db706282fff37dec10f9429ac91969695adf37e90c613" exitCode=0 Jan 21 06:52:18 crc kubenswrapper[4913]: I0121 06:52:18.369210 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" 
event={"ID":"de97e815-d3a3-4a3d-81e2-6054f65b82f0","Type":"ContainerDied","Data":"bd6ec813f35e1b00979db706282fff37dec10f9429ac91969695adf37e90c613"} Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.807258 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.817273 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-88w4d" Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.909365 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txp7j\" (UniqueName: \"kubernetes.io/projected/84e5eed1-ff67-483b-808d-466413987e09-kube-api-access-txp7j\") pod \"84e5eed1-ff67-483b-808d-466413987e09\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.909460 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de97e815-d3a3-4a3d-81e2-6054f65b82f0-operator-scripts\") pod \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\" (UID: \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.909539 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e5eed1-ff67-483b-808d-466413987e09-operator-scripts\") pod \"84e5eed1-ff67-483b-808d-466413987e09\" (UID: \"84e5eed1-ff67-483b-808d-466413987e09\") " Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.909576 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kjpt\" (UniqueName: \"kubernetes.io/projected/de97e815-d3a3-4a3d-81e2-6054f65b82f0-kube-api-access-8kjpt\") pod \"de97e815-d3a3-4a3d-81e2-6054f65b82f0\" (UID: 
\"de97e815-d3a3-4a3d-81e2-6054f65b82f0\") " Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.910288 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de97e815-d3a3-4a3d-81e2-6054f65b82f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de97e815-d3a3-4a3d-81e2-6054f65b82f0" (UID: "de97e815-d3a3-4a3d-81e2-6054f65b82f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.910434 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e5eed1-ff67-483b-808d-466413987e09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84e5eed1-ff67-483b-808d-466413987e09" (UID: "84e5eed1-ff67-483b-808d-466413987e09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.914275 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e5eed1-ff67-483b-808d-466413987e09-kube-api-access-txp7j" (OuterVolumeSpecName: "kube-api-access-txp7j") pod "84e5eed1-ff67-483b-808d-466413987e09" (UID: "84e5eed1-ff67-483b-808d-466413987e09"). InnerVolumeSpecName "kube-api-access-txp7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:52:19 crc kubenswrapper[4913]: I0121 06:52:19.915182 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de97e815-d3a3-4a3d-81e2-6054f65b82f0-kube-api-access-8kjpt" (OuterVolumeSpecName: "kube-api-access-8kjpt") pod "de97e815-d3a3-4a3d-81e2-6054f65b82f0" (UID: "de97e815-d3a3-4a3d-81e2-6054f65b82f0"). InnerVolumeSpecName "kube-api-access-8kjpt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.011632 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txp7j\" (UniqueName: \"kubernetes.io/projected/84e5eed1-ff67-483b-808d-466413987e09-kube-api-access-txp7j\") on node \"crc\" DevicePath \"\"" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.011705 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de97e815-d3a3-4a3d-81e2-6054f65b82f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.011734 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84e5eed1-ff67-483b-808d-466413987e09-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.011746 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kjpt\" (UniqueName: \"kubernetes.io/projected/de97e815-d3a3-4a3d-81e2-6054f65b82f0-kube-api-access-8kjpt\") on node \"crc\" DevicePath \"\"" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.393161 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" event={"ID":"de97e815-d3a3-4a3d-81e2-6054f65b82f0","Type":"ContainerDied","Data":"b5393f25576b08afc85732da2f72e652c47836419a36e49a4a89ca0fdc5ced01"} Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.393221 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5393f25576b08afc85732da2f72e652c47836419a36e49a4a89ca0fdc5ced01" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.393221 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.396079 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-88w4d" event={"ID":"84e5eed1-ff67-483b-808d-466413987e09","Type":"ContainerDied","Data":"8a3f20a49f6d57365eb79b7bf4e963d8c25f5eb4c885817d083623c4901b1ce7"} Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.396147 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a3f20a49f6d57365eb79b7bf4e963d8c25f5eb4c885817d083623c4901b1ce7" Jan 21 06:52:20 crc kubenswrapper[4913]: I0121 06:52:20.396238 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-88w4d" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.744284 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-5h9sh"] Jan 21 06:52:24 crc kubenswrapper[4913]: E0121 06:52:24.744901 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de97e815-d3a3-4a3d-81e2-6054f65b82f0" containerName="mariadb-account-create-update" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.744914 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="de97e815-d3a3-4a3d-81e2-6054f65b82f0" containerName="mariadb-account-create-update" Jan 21 06:52:24 crc kubenswrapper[4913]: E0121 06:52:24.744927 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e5eed1-ff67-483b-808d-466413987e09" containerName="mariadb-database-create" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.744933 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e5eed1-ff67-483b-808d-466413987e09" containerName="mariadb-database-create" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.745053 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="de97e815-d3a3-4a3d-81e2-6054f65b82f0" 
containerName="mariadb-account-create-update" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.745073 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e5eed1-ff67-483b-808d-466413987e09" containerName="mariadb-database-create" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.745449 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.747875 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-stbww" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.749304 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.749666 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.766997 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-5h9sh"] Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.889243 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-config-data\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.889329 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/431d7c8b-5c95-4534-8cba-dd55885fc5cb-etc-machine-id\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.889405 4913 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqqs5\" (UniqueName: \"kubernetes.io/projected/431d7c8b-5c95-4534-8cba-dd55885fc5cb-kube-api-access-lqqs5\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.889440 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-db-sync-config-data\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.889467 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-scripts\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.990748 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqqs5\" (UniqueName: \"kubernetes.io/projected/431d7c8b-5c95-4534-8cba-dd55885fc5cb-kube-api-access-lqqs5\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.990839 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-db-sync-config-data\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.990886 4913 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-scripts\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.990945 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-config-data\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.991022 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/431d7c8b-5c95-4534-8cba-dd55885fc5cb-etc-machine-id\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.991131 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/431d7c8b-5c95-4534-8cba-dd55885fc5cb-etc-machine-id\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:24 crc kubenswrapper[4913]: I0121 06:52:24.998020 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-scripts\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:25 crc kubenswrapper[4913]: I0121 06:52:25.000673 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-config-data\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:25 crc kubenswrapper[4913]: I0121 06:52:25.001393 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-db-sync-config-data\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:25 crc kubenswrapper[4913]: I0121 06:52:25.024553 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqqs5\" (UniqueName: \"kubernetes.io/projected/431d7c8b-5c95-4534-8cba-dd55885fc5cb-kube-api-access-lqqs5\") pod \"cinder-db-sync-5h9sh\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:25 crc kubenswrapper[4913]: I0121 06:52:25.077186 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:25 crc kubenswrapper[4913]: I0121 06:52:25.573747 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-5h9sh"] Jan 21 06:52:26 crc kubenswrapper[4913]: I0121 06:52:26.444644 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" event={"ID":"431d7c8b-5c95-4534-8cba-dd55885fc5cb","Type":"ContainerStarted","Data":"43414530e40244b9d1e55c4f915e73508410496306477b92733ae02210ce7e56"} Jan 21 06:52:31 crc kubenswrapper[4913]: I0121 06:52:31.018229 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:52:40 crc kubenswrapper[4913]: I0121 06:52:40.566736 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" event={"ID":"431d7c8b-5c95-4534-8cba-dd55885fc5cb","Type":"ContainerStarted","Data":"9e53798c55bdb9c3197d82fb273c6d283738061cb335b3befb8f5bcffda2529a"} Jan 21 06:52:40 crc kubenswrapper[4913]: I0121 06:52:40.589441 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" podStartSLOduration=2.698076102 podStartE2EDuration="16.589419442s" podCreationTimestamp="2026-01-21 06:52:24 +0000 UTC" firstStartedPulling="2026-01-21 06:52:25.58265337 +0000 UTC m=+1035.379013043" lastFinishedPulling="2026-01-21 06:52:39.47399672 +0000 UTC m=+1049.270356383" observedRunningTime="2026-01-21 06:52:40.585371623 +0000 UTC m=+1050.381731306" watchObservedRunningTime="2026-01-21 06:52:40.589419442 +0000 UTC m=+1050.385779235" Jan 21 06:52:45 crc kubenswrapper[4913]: I0121 06:52:45.613037 4913 generic.go:334] "Generic (PLEG): container finished" podID="431d7c8b-5c95-4534-8cba-dd55885fc5cb" containerID="9e53798c55bdb9c3197d82fb273c6d283738061cb335b3befb8f5bcffda2529a" exitCode=0 Jan 21 06:52:45 crc kubenswrapper[4913]: I0121 06:52:45.613112 
4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" event={"ID":"431d7c8b-5c95-4534-8cba-dd55885fc5cb","Type":"ContainerDied","Data":"9e53798c55bdb9c3197d82fb273c6d283738061cb335b3befb8f5bcffda2529a"} Jan 21 06:52:46 crc kubenswrapper[4913]: I0121 06:52:46.955508 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.037768 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqqs5\" (UniqueName: \"kubernetes.io/projected/431d7c8b-5c95-4534-8cba-dd55885fc5cb-kube-api-access-lqqs5\") pod \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.037913 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-db-sync-config-data\") pod \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.037943 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-scripts\") pod \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.037976 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-config-data\") pod \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.038007 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/431d7c8b-5c95-4534-8cba-dd55885fc5cb-etc-machine-id\") pod \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\" (UID: \"431d7c8b-5c95-4534-8cba-dd55885fc5cb\") " Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.038244 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/431d7c8b-5c95-4534-8cba-dd55885fc5cb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "431d7c8b-5c95-4534-8cba-dd55885fc5cb" (UID: "431d7c8b-5c95-4534-8cba-dd55885fc5cb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.043849 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "431d7c8b-5c95-4534-8cba-dd55885fc5cb" (UID: "431d7c8b-5c95-4534-8cba-dd55885fc5cb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.044838 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431d7c8b-5c95-4534-8cba-dd55885fc5cb-kube-api-access-lqqs5" (OuterVolumeSpecName: "kube-api-access-lqqs5") pod "431d7c8b-5c95-4534-8cba-dd55885fc5cb" (UID: "431d7c8b-5c95-4534-8cba-dd55885fc5cb"). InnerVolumeSpecName "kube-api-access-lqqs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.047695 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-scripts" (OuterVolumeSpecName: "scripts") pod "431d7c8b-5c95-4534-8cba-dd55885fc5cb" (UID: "431d7c8b-5c95-4534-8cba-dd55885fc5cb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.091684 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-config-data" (OuterVolumeSpecName: "config-data") pod "431d7c8b-5c95-4534-8cba-dd55885fc5cb" (UID: "431d7c8b-5c95-4534-8cba-dd55885fc5cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.139334 4913 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.139366 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.139376 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/431d7c8b-5c95-4534-8cba-dd55885fc5cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.139384 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/431d7c8b-5c95-4534-8cba-dd55885fc5cb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.139393 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqqs5\" (UniqueName: \"kubernetes.io/projected/431d7c8b-5c95-4534-8cba-dd55885fc5cb-kube-api-access-lqqs5\") on node \"crc\" DevicePath \"\"" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.633362 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" 
event={"ID":"431d7c8b-5c95-4534-8cba-dd55885fc5cb","Type":"ContainerDied","Data":"43414530e40244b9d1e55c4f915e73508410496306477b92733ae02210ce7e56"} Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.633776 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43414530e40244b9d1e55c4f915e73508410496306477b92733ae02210ce7e56" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.633447 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-5h9sh" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.978270 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:52:47 crc kubenswrapper[4913]: E0121 06:52:47.979056 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431d7c8b-5c95-4534-8cba-dd55885fc5cb" containerName="cinder-db-sync" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.979081 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="431d7c8b-5c95-4534-8cba-dd55885fc5cb" containerName="cinder-db-sync" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.979496 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="431d7c8b-5c95-4534-8cba-dd55885fc5cb" containerName="cinder-db-sync" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.982921 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.988390 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-stbww" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.990182 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.990437 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 21 06:52:47 crc kubenswrapper[4913]: I0121 06:52:47.991439 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scheduler-config-data" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.014906 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.044289 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.045941 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.050129 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-volume-volume1-config-data" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.051401 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sjpd\" (UniqueName: \"kubernetes.io/projected/7667e048-a702-4a50-8e72-35d001e6a310-kube-api-access-4sjpd\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.051446 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.051467 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.051496 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7667e048-a702-4a50-8e72-35d001e6a310-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.051512 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-scripts\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.055619 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.082643 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.084683 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.087790 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-backup-config-data" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.115896 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153120 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-scripts\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153156 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-sys\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153174 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153192 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92vr2\" (UniqueName: \"kubernetes.io/projected/5c8afc51-5054-46d8-a16d-e07541ff4af7-kube-api-access-92vr2\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153213 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-dev\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153228 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n78xx\" (UniqueName: \"kubernetes.io/projected/b256a85e-47c9-4195-9732-d58250fd3f42-kube-api-access-n78xx\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153245 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-run\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153261 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153282 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153306 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sjpd\" (UniqueName: \"kubernetes.io/projected/7667e048-a702-4a50-8e72-35d001e6a310-kube-api-access-4sjpd\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153321 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153338 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153354 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153368 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153383 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-run\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153400 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153415 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153439 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153457 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153488 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7667e048-a702-4a50-8e72-35d001e6a310-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153504 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153669 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153692 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-scripts\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153706 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153733 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153756 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153770 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153790 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153808 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153826 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153842 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153863 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-lib-modules\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.153879 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.154827 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7667e048-a702-4a50-8e72-35d001e6a310-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.170790 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.172314 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.177121 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-scripts\") pod \"cinder-scheduler-0\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.177189 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sjpd\" (UniqueName: \"kubernetes.io/projected/7667e048-a702-4a50-8e72-35d001e6a310-kube-api-access-4sjpd\") pod \"cinder-scheduler-0\" (UID: 
\"7667e048-a702-4a50-8e72-35d001e6a310\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.245853 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.246766 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.252938 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-api-config-data" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255242 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255284 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255315 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255333 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-machine-id\") 
pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255353 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255386 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255408 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255433 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255460 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" 
Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255459 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255482 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255505 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255538 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-lib-modules\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255539 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255560 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" 
(UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255567 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255585 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255610 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-lib-modules\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255630 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255717 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: 
\"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255533 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255736 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-scripts\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255770 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-sys\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255718 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255819 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-sys\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255823 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255867 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92vr2\" (UniqueName: \"kubernetes.io/projected/5c8afc51-5054-46d8-a16d-e07541ff4af7-kube-api-access-92vr2\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255903 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n78xx\" (UniqueName: \"kubernetes.io/projected/b256a85e-47c9-4195-9732-d58250fd3f42-kube-api-access-n78xx\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255924 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-dev\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255947 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-run\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255933 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.255981 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256017 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256054 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-run\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256063 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256089 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-dev\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " 
pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256098 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256131 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256136 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256097 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256171 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256146 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256217 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-run\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256271 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256274 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256312 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256342 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-run\") pod \"cinder-backup-0\" (UID: 
\"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.256422 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.264187 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.264674 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.264910 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-scripts\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.265499 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.266394 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: 
\"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.269060 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.269665 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.276456 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n78xx\" (UniqueName: \"kubernetes.io/projected/b256a85e-47c9-4195-9732-d58250fd3f42-kube-api-access-n78xx\") pod \"cinder-backup-0\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.290922 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92vr2\" (UniqueName: \"kubernetes.io/projected/5c8afc51-5054-46d8-a16d-e07541ff4af7-kube-api-access-92vr2\") pod \"cinder-volume-volume1-0\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.307725 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.357202 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-scripts\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.357778 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.357868 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data-custom\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.357995 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff633750-3c62-48b5-b977-f1b4f42b9b7e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.358154 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2nvv\" (UniqueName: \"kubernetes.io/projected/ff633750-3c62-48b5-b977-f1b4f42b9b7e-kube-api-access-g2nvv\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 
06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.358262 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff633750-3c62-48b5-b977-f1b4f42b9b7e-logs\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.360915 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.420055 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459281 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-scripts\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459451 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459551 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data-custom\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459693 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ff633750-3c62-48b5-b977-f1b4f42b9b7e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459758 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff633750-3c62-48b5-b977-f1b4f42b9b7e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459872 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2nvv\" (UniqueName: \"kubernetes.io/projected/ff633750-3c62-48b5-b977-f1b4f42b9b7e-kube-api-access-g2nvv\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.459982 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff633750-3c62-48b5-b977-f1b4f42b9b7e-logs\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.460419 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff633750-3c62-48b5-b977-f1b4f42b9b7e-logs\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.465577 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-scripts\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 
06:52:48.467221 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.474708 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data-custom\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.478734 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2nvv\" (UniqueName: \"kubernetes.io/projected/ff633750-3c62-48b5-b977-f1b4f42b9b7e-kube-api-access-g2nvv\") pod \"cinder-api-0\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:48 crc kubenswrapper[4913]: I0121 06:52:48.616739 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.428187 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.462574 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:52:49 crc kubenswrapper[4913]: W0121 06:52:49.466184 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff633750_3c62_48b5_b977_f1b4f42b9b7e.slice/crio-e1fddda6c1e6decf5e5643506dbc75a4dd729be32e59911c8fadb68f6bccd2a1 WatchSource:0}: Error finding container e1fddda6c1e6decf5e5643506dbc75a4dd729be32e59911c8fadb68f6bccd2a1: Status 404 returned error can't find the container with id e1fddda6c1e6decf5e5643506dbc75a4dd729be32e59911c8fadb68f6bccd2a1 Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.524983 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:52:49 crc kubenswrapper[4913]: W0121 06:52:49.539711 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb256a85e_47c9_4195_9732_d58250fd3f42.slice/crio-8d90cbd4148caa98577b4dcd08673fc8b51996acf7d066816c4fd8aad0d83fcf WatchSource:0}: Error finding container 8d90cbd4148caa98577b4dcd08673fc8b51996acf7d066816c4fd8aad0d83fcf: Status 404 returned error can't find the container with id 8d90cbd4148caa98577b4dcd08673fc8b51996acf7d066816c4fd8aad0d83fcf Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.588687 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:52:49 crc kubenswrapper[4913]: W0121 06:52:49.598258 4913 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c8afc51_5054_46d8_a16d_e07541ff4af7.slice/crio-0914a5d09062d8edcaa686a31a8df3698c5a4b6a35c39a1ab80f1c4c3487a465 WatchSource:0}: Error finding container 0914a5d09062d8edcaa686a31a8df3698c5a4b6a35c39a1ab80f1c4c3487a465: Status 404 returned error can't find the container with id 0914a5d09062d8edcaa686a31a8df3698c5a4b6a35c39a1ab80f1c4c3487a465 Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.645141 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"b256a85e-47c9-4195-9732-d58250fd3f42","Type":"ContainerStarted","Data":"8d90cbd4148caa98577b4dcd08673fc8b51996acf7d066816c4fd8aad0d83fcf"} Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.646389 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"7667e048-a702-4a50-8e72-35d001e6a310","Type":"ContainerStarted","Data":"f981181d61a91c8c4ac8291fd1a2e334c01d5ff76df7a42189afca1df38c5352"} Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.647304 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"ff633750-3c62-48b5-b977-f1b4f42b9b7e","Type":"ContainerStarted","Data":"e1fddda6c1e6decf5e5643506dbc75a4dd729be32e59911c8fadb68f6bccd2a1"} Jan 21 06:52:49 crc kubenswrapper[4913]: I0121 06:52:49.648484 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"0914a5d09062d8edcaa686a31a8df3698c5a4b6a35c39a1ab80f1c4c3487a465"} Jan 21 06:52:50 crc kubenswrapper[4913]: I0121 06:52:50.657214 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"ff633750-3c62-48b5-b977-f1b4f42b9b7e","Type":"ContainerStarted","Data":"8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c"} Jan 21 06:52:51 crc 
kubenswrapper[4913]: I0121 06:52:51.678817 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"ff633750-3c62-48b5-b977-f1b4f42b9b7e","Type":"ContainerStarted","Data":"a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828"} Jan 21 06:52:51 crc kubenswrapper[4913]: I0121 06:52:51.679310 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:52:51 crc kubenswrapper[4913]: I0121 06:52:51.681870 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"7667e048-a702-4a50-8e72-35d001e6a310","Type":"ContainerStarted","Data":"54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482"} Jan 21 06:52:51 crc kubenswrapper[4913]: I0121 06:52:51.681943 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"7667e048-a702-4a50-8e72-35d001e6a310","Type":"ContainerStarted","Data":"7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729"} Jan 21 06:52:51 crc kubenswrapper[4913]: I0121 06:52:51.704652 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-0" podStartSLOduration=3.7046246959999998 podStartE2EDuration="3.704624696s" podCreationTimestamp="2026-01-21 06:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:52:51.700078103 +0000 UTC m=+1061.496437786" watchObservedRunningTime="2026-01-21 06:52:51.704624696 +0000 UTC m=+1061.500984389" Jan 21 06:52:51 crc kubenswrapper[4913]: I0121 06:52:51.730519 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-0" podStartSLOduration=3.997357781 podStartE2EDuration="4.730493133s" podCreationTimestamp="2026-01-21 06:52:47 +0000 UTC" firstStartedPulling="2026-01-21 
06:52:49.443024754 +0000 UTC m=+1059.239384437" lastFinishedPulling="2026-01-21 06:52:50.176160086 +0000 UTC m=+1059.972519789" observedRunningTime="2026-01-21 06:52:51.728065977 +0000 UTC m=+1061.524425660" watchObservedRunningTime="2026-01-21 06:52:51.730493133 +0000 UTC m=+1061.526852846" Jan 21 06:52:52 crc kubenswrapper[4913]: I0121 06:52:52.696000 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"4a4ff119e229bc6d5b17f65d9b0cd44700918af9be3f7e55f8c5fb01237f1700"} Jan 21 06:52:52 crc kubenswrapper[4913]: I0121 06:52:52.696717 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"a9b3cc0af2adfe7efb2ed4f9ad4fd2852ebe6cb1acc7efaad4b55dc016bc0d56"} Jan 21 06:52:52 crc kubenswrapper[4913]: I0121 06:52:52.697829 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"b256a85e-47c9-4195-9732-d58250fd3f42","Type":"ContainerStarted","Data":"001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963"} Jan 21 06:52:52 crc kubenswrapper[4913]: I0121 06:52:52.697871 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"b256a85e-47c9-4195-9732-d58250fd3f42","Type":"ContainerStarted","Data":"503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282"} Jan 21 06:52:52 crc kubenswrapper[4913]: I0121 06:52:52.721091 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podStartSLOduration=2.567464159 podStartE2EDuration="4.721069041s" podCreationTimestamp="2026-01-21 06:52:48 +0000 UTC" firstStartedPulling="2026-01-21 06:52:49.601671889 +0000 UTC m=+1059.398031562" lastFinishedPulling="2026-01-21 06:52:51.755276761 +0000 UTC 
m=+1061.551636444" observedRunningTime="2026-01-21 06:52:52.719568001 +0000 UTC m=+1062.515927674" watchObservedRunningTime="2026-01-21 06:52:52.721069041 +0000 UTC m=+1062.517428724" Jan 21 06:52:52 crc kubenswrapper[4913]: I0121 06:52:52.754457 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-0" podStartSLOduration=2.545032853 podStartE2EDuration="4.754432339s" podCreationTimestamp="2026-01-21 06:52:48 +0000 UTC" firstStartedPulling="2026-01-21 06:52:49.544305993 +0000 UTC m=+1059.340665716" lastFinishedPulling="2026-01-21 06:52:51.753705519 +0000 UTC m=+1061.550065202" observedRunningTime="2026-01-21 06:52:52.74922697 +0000 UTC m=+1062.545586643" watchObservedRunningTime="2026-01-21 06:52:52.754432339 +0000 UTC m=+1062.550792042" Jan 21 06:52:53 crc kubenswrapper[4913]: I0121 06:52:53.308110 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:53 crc kubenswrapper[4913]: I0121 06:52:53.361412 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:53 crc kubenswrapper[4913]: I0121 06:52:53.421451 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:53 crc kubenswrapper[4913]: I0121 06:52:53.722843 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="4a4ff119e229bc6d5b17f65d9b0cd44700918af9be3f7e55f8c5fb01237f1700" exitCode=1 Jan 21 06:52:53 crc kubenswrapper[4913]: I0121 06:52:53.722993 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"4a4ff119e229bc6d5b17f65d9b0cd44700918af9be3f7e55f8c5fb01237f1700"} Jan 21 06:52:53 crc kubenswrapper[4913]: I0121 06:52:53.723516 4913 
scope.go:117] "RemoveContainer" containerID="4a4ff119e229bc6d5b17f65d9b0cd44700918af9be3f7e55f8c5fb01237f1700" Jan 21 06:52:54 crc kubenswrapper[4913]: I0121 06:52:54.737099 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="a9b3cc0af2adfe7efb2ed4f9ad4fd2852ebe6cb1acc7efaad4b55dc016bc0d56" exitCode=1 Jan 21 06:52:54 crc kubenswrapper[4913]: I0121 06:52:54.737222 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"a9b3cc0af2adfe7efb2ed4f9ad4fd2852ebe6cb1acc7efaad4b55dc016bc0d56"} Jan 21 06:52:54 crc kubenswrapper[4913]: I0121 06:52:54.738656 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3"} Jan 21 06:52:54 crc kubenswrapper[4913]: I0121 06:52:54.737817 4913 scope.go:117] "RemoveContainer" containerID="a9b3cc0af2adfe7efb2ed4f9ad4fd2852ebe6cb1acc7efaad4b55dc016bc0d56" Jan 21 06:52:55 crc kubenswrapper[4913]: I0121 06:52:55.776846 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b"} Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.793042 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b" exitCode=1 Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.794741 4913 scope.go:117] "RemoveContainer" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b" Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.794853 
4913 scope.go:117] "RemoveContainer" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3" Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.795288 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3" exitCode=1 Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.793108 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b"} Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.795855 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3"} Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.795947 4913 scope.go:117] "RemoveContainer" containerID="a9b3cc0af2adfe7efb2ed4f9ad4fd2852ebe6cb1acc7efaad4b55dc016bc0d56" Jan 21 06:52:56 crc kubenswrapper[4913]: E0121 06:52:56.801421 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:52:56 crc kubenswrapper[4913]: I0121 06:52:56.878285 4913 scope.go:117] "RemoveContainer" 
containerID="4a4ff119e229bc6d5b17f65d9b0cd44700918af9be3f7e55f8c5fb01237f1700" Jan 21 06:52:57 crc kubenswrapper[4913]: I0121 06:52:57.361788 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:57 crc kubenswrapper[4913]: I0121 06:52:57.808970 4913 scope.go:117] "RemoveContainer" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b" Jan 21 06:52:57 crc kubenswrapper[4913]: I0121 06:52:57.809004 4913 scope.go:117] "RemoveContainer" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3" Jan 21 06:52:57 crc kubenswrapper[4913]: E0121 06:52:57.809277 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:52:58 crc kubenswrapper[4913]: I0121 06:52:58.361750 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:58 crc kubenswrapper[4913]: I0121 06:52:58.361817 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:52:58 crc kubenswrapper[4913]: I0121 06:52:58.574511 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:52:58 crc kubenswrapper[4913]: I0121 06:52:58.606097 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:52:58 crc kubenswrapper[4913]: I0121 06:52:58.816497 4913 scope.go:117] "RemoveContainer" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b" Jan 21 06:52:58 crc kubenswrapper[4913]: I0121 06:52:58.816533 4913 scope.go:117] "RemoveContainer" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3" Jan 21 06:52:58 crc kubenswrapper[4913]: E0121 06:52:58.816927 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:52:59 crc kubenswrapper[4913]: I0121 06:52:59.824276 4913 scope.go:117] "RemoveContainer" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b" Jan 21 06:52:59 crc kubenswrapper[4913]: I0121 06:52:59.824624 4913 scope.go:117] "RemoveContainer" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3" Jan 21 06:52:59 crc kubenswrapper[4913]: E0121 06:52:59.825010 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" 
podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:53:00 crc kubenswrapper[4913]: I0121 06:53:00.546851 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.873012 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.874920 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.887886 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.976345 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-scripts\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.976421 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-etc-machine-id\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.976486 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dlj\" (UniqueName: \"kubernetes.io/projected/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-kube-api-access-w9dlj\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.976666 4913 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:01 crc kubenswrapper[4913]: I0121 06:53:01.976691 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data-custom\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.077067 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.077337 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data-custom\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.077534 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-scripts\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.077642 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-etc-machine-id\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.077744 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9dlj\" (UniqueName: \"kubernetes.io/projected/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-kube-api-access-w9dlj\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.077838 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-etc-machine-id\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.093710 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-scripts\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.096241 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.097464 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data-custom\") pod \"cinder-scheduler-1\" (UID: 
\"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.107441 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9dlj\" (UniqueName: \"kubernetes.io/projected/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-kube-api-access-w9dlj\") pod \"cinder-scheduler-1\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.206639 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.676328 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 21 06:53:02 crc kubenswrapper[4913]: I0121 06:53:02.847181 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c","Type":"ContainerStarted","Data":"b16ed940864ef946c18ba1dbbbda2af9f6fa0f1dd70aecdf23a72e769b81c37b"} Jan 21 06:53:03 crc kubenswrapper[4913]: I0121 06:53:03.859556 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c","Type":"ContainerStarted","Data":"dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732"} Jan 21 06:53:03 crc kubenswrapper[4913]: I0121 06:53:03.859960 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c","Type":"ContainerStarted","Data":"586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d"} Jan 21 06:53:03 crc kubenswrapper[4913]: I0121 06:53:03.888145 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-1" podStartSLOduration=2.888126501 
podStartE2EDuration="2.888126501s" podCreationTimestamp="2026-01-21 06:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:53:03.880478455 +0000 UTC m=+1073.676838128" watchObservedRunningTime="2026-01-21 06:53:03.888126501 +0000 UTC m=+1073.684486174" Jan 21 06:53:07 crc kubenswrapper[4913]: I0121 06:53:07.207047 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:08 crc kubenswrapper[4913]: I0121 06:53:08.319701 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:53:08 crc kubenswrapper[4913]: I0121 06:53:08.320090 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.429119 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.498317 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.499612 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.515966 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.527848 4913 scope.go:117] "RemoveContainer" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.528134 4913 scope.go:117] "RemoveContainer" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.652035 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-scripts\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.652262 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data-custom\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.652280 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-etc-machine-id\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.652294 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.652352 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb9jd\" (UniqueName: \"kubernetes.io/projected/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-kube-api-access-vb9jd\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.754044 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb9jd\" (UniqueName: \"kubernetes.io/projected/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-kube-api-access-vb9jd\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.754170 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-scripts\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.754202 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data-custom\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.754224 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-etc-machine-id\") 
pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.754246 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.754344 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-etc-machine-id\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.760261 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data-custom\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.760692 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.761406 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-scripts\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 
06:53:12.772718 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb9jd\" (UniqueName: \"kubernetes.io/projected/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-kube-api-access-vb9jd\") pod \"cinder-scheduler-2\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.822502 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:12 crc kubenswrapper[4913]: I0121 06:53:12.938406 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca"} Jan 21 06:53:13 crc kubenswrapper[4913]: I0121 06:53:13.252813 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 21 06:53:13 crc kubenswrapper[4913]: W0121 06:53:13.255051 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3556b2f0_f34e_47d9_b864_c0a7e8b6989c.slice/crio-a0a751dc704f2515759c7f96e7be28ddc86003aa9a55b8f029fce3c70bfbcf20 WatchSource:0}: Error finding container a0a751dc704f2515759c7f96e7be28ddc86003aa9a55b8f029fce3c70bfbcf20: Status 404 returned error can't find the container with id a0a751dc704f2515759c7f96e7be28ddc86003aa9a55b8f029fce3c70bfbcf20 Jan 21 06:53:13 crc kubenswrapper[4913]: I0121 06:53:13.950620 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"3556b2f0-f34e-47d9-b864-c0a7e8b6989c","Type":"ContainerStarted","Data":"ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499"} Jan 21 06:53:13 crc kubenswrapper[4913]: I0121 06:53:13.951140 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"3556b2f0-f34e-47d9-b864-c0a7e8b6989c","Type":"ContainerStarted","Data":"a0a751dc704f2515759c7f96e7be28ddc86003aa9a55b8f029fce3c70bfbcf20"} Jan 21 06:53:13 crc kubenswrapper[4913]: I0121 06:53:13.955552 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0"} Jan 21 06:53:14 crc kubenswrapper[4913]: I0121 06:53:14.968871 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"3556b2f0-f34e-47d9-b864-c0a7e8b6989c","Type":"ContainerStarted","Data":"172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c"} Jan 21 06:53:14 crc kubenswrapper[4913]: I0121 06:53:14.971441 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0" exitCode=1 Jan 21 06:53:14 crc kubenswrapper[4913]: I0121 06:53:14.971480 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0"} Jan 21 06:53:14 crc kubenswrapper[4913]: I0121 06:53:14.971506 4913 scope.go:117] "RemoveContainer" containerID="bef814ff3b1f6cd53aae3c01833bedb13439e6f010687d784dfc4e05c36c7df3" Jan 21 06:53:14 crc kubenswrapper[4913]: I0121 06:53:14.972000 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0" Jan 21 06:53:14 crc kubenswrapper[4913]: E0121 06:53:14.972338 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe 
pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:53:15 crc kubenswrapper[4913]: I0121 06:53:15.017654 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-2" podStartSLOduration=3.01762066 podStartE2EDuration="3.01762066s" podCreationTimestamp="2026-01-21 06:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:53:15.009969344 +0000 UTC m=+1084.806329027" watchObservedRunningTime="2026-01-21 06:53:15.01762066 +0000 UTC m=+1084.813980363" Jan 21 06:53:15 crc kubenswrapper[4913]: I0121 06:53:15.983583 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca" exitCode=1 Jan 21 06:53:15 crc kubenswrapper[4913]: I0121 06:53:15.983702 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca"} Jan 21 06:53:15 crc kubenswrapper[4913]: I0121 06:53:15.984077 4913 scope.go:117] "RemoveContainer" containerID="016e2cf6b7ce540e88876673f3a1992376b4d602718f5a2fa318915d4763b57b" Jan 21 06:53:15 crc kubenswrapper[4913]: I0121 06:53:15.984466 4913 scope.go:117] "RemoveContainer" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca" Jan 21 06:53:15 crc kubenswrapper[4913]: I0121 06:53:15.984539 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0" Jan 21 06:53:15 crc kubenswrapper[4913]: E0121 06:53:15.985016 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:53:17 crc kubenswrapper[4913]: I0121 06:53:17.004691 4913 scope.go:117] "RemoveContainer" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca" Jan 21 06:53:17 crc kubenswrapper[4913]: I0121 06:53:17.005722 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0" Jan 21 06:53:17 crc kubenswrapper[4913]: E0121 06:53:17.006376 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:53:17 crc kubenswrapper[4913]: I0121 06:53:17.822859 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:18 crc kubenswrapper[4913]: I0121 06:53:18.361052 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:53:18 crc kubenswrapper[4913]: I0121 06:53:18.361457 4913 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:53:18 crc kubenswrapper[4913]: I0121 06:53:18.361479 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:53:18 crc kubenswrapper[4913]: I0121 06:53:18.362423 4913 scope.go:117] "RemoveContainer" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca" Jan 21 06:53:18 crc kubenswrapper[4913]: I0121 06:53:18.362446 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0" Jan 21 06:53:18 crc kubenswrapper[4913]: E0121 06:53:18.362857 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:53:23 crc kubenswrapper[4913]: I0121 06:53:23.046486 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:23 crc kubenswrapper[4913]: I0121 06:53:23.526839 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 21 06:53:23 crc kubenswrapper[4913]: I0121 06:53:23.527334 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-2" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="cinder-scheduler" containerID="cri-o://ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499" gracePeriod=30 Jan 21 06:53:23 
crc kubenswrapper[4913]: I0121 06:53:23.527461 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-2" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="probe" containerID="cri-o://172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c" gracePeriod=30 Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.068231 4913 generic.go:334] "Generic (PLEG): container finished" podID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerID="172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c" exitCode=0 Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.068285 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"3556b2f0-f34e-47d9-b864-c0a7e8b6989c","Type":"ContainerDied","Data":"172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c"} Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.688504 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.844235 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data-custom\") pod \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.844308 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data\") pod \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.844344 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-scripts\") pod \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.844402 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb9jd\" (UniqueName: \"kubernetes.io/projected/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-kube-api-access-vb9jd\") pod \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.844424 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-etc-machine-id\") pod \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\" (UID: \"3556b2f0-f34e-47d9-b864-c0a7e8b6989c\") " Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.844754 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3556b2f0-f34e-47d9-b864-c0a7e8b6989c" (UID: "3556b2f0-f34e-47d9-b864-c0a7e8b6989c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.851140 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-scripts" (OuterVolumeSpecName: "scripts") pod "3556b2f0-f34e-47d9-b864-c0a7e8b6989c" (UID: "3556b2f0-f34e-47d9-b864-c0a7e8b6989c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.853010 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-kube-api-access-vb9jd" (OuterVolumeSpecName: "kube-api-access-vb9jd") pod "3556b2f0-f34e-47d9-b864-c0a7e8b6989c" (UID: "3556b2f0-f34e-47d9-b864-c0a7e8b6989c"). InnerVolumeSpecName "kube-api-access-vb9jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.853807 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3556b2f0-f34e-47d9-b864-c0a7e8b6989c" (UID: "3556b2f0-f34e-47d9-b864-c0a7e8b6989c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.930425 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data" (OuterVolumeSpecName: "config-data") pod "3556b2f0-f34e-47d9-b864-c0a7e8b6989c" (UID: "3556b2f0-f34e-47d9-b864-c0a7e8b6989c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.946262 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.946313 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.946337 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb9jd\" (UniqueName: \"kubernetes.io/projected/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-kube-api-access-vb9jd\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.946356 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:24 crc kubenswrapper[4913]: I0121 06:53:24.946379 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3556b2f0-f34e-47d9-b864-c0a7e8b6989c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.076803 4913 generic.go:334] "Generic (PLEG): container finished" podID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerID="ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499" exitCode=0 Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.076848 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"3556b2f0-f34e-47d9-b864-c0a7e8b6989c","Type":"ContainerDied","Data":"ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499"} Jan 21 
06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.076877 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-2" event={"ID":"3556b2f0-f34e-47d9-b864-c0a7e8b6989c","Type":"ContainerDied","Data":"a0a751dc704f2515759c7f96e7be28ddc86003aa9a55b8f029fce3c70bfbcf20"} Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.076899 4913 scope.go:117] "RemoveContainer" containerID="172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c" Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.077051 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-2" Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.110449 4913 scope.go:117] "RemoveContainer" containerID="ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499" Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.115722 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.124851 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-2"] Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.131110 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.131346 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-1" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="cinder-scheduler" containerID="cri-o://586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d" gracePeriod=30 Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.131710 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-1" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="probe" 
containerID="cri-o://dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732" gracePeriod=30 Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.153981 4913 scope.go:117] "RemoveContainer" containerID="172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c" Jan 21 06:53:25 crc kubenswrapper[4913]: E0121 06:53:25.154638 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c\": container with ID starting with 172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c not found: ID does not exist" containerID="172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c" Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.154679 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c"} err="failed to get container status \"172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c\": rpc error: code = NotFound desc = could not find container \"172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c\": container with ID starting with 172cf7f2bfcacc9be9a5aff1971b61fd3936885e0557fb280727c5371456291c not found: ID does not exist" Jan 21 06:53:25 crc kubenswrapper[4913]: I0121 06:53:25.154705 4913 scope.go:117] "RemoveContainer" containerID="ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499" Jan 21 06:53:25 crc kubenswrapper[4913]: E0121 06:53:25.155201 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499\": container with ID starting with ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499 not found: ID does not exist" containerID="ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499" Jan 21 06:53:25 
crc kubenswrapper[4913]: I0121 06:53:25.155235 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499"} err="failed to get container status \"ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499\": rpc error: code = NotFound desc = could not find container \"ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499\": container with ID starting with ec7c9a9813d1953c7c00ecd078f2fad0067c1445f4ef8fdad219558f93b5d499 not found: ID does not exist" Jan 21 06:53:26 crc kubenswrapper[4913]: I0121 06:53:26.090460 4913 generic.go:334] "Generic (PLEG): container finished" podID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerID="dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732" exitCode=0 Jan 21 06:53:26 crc kubenswrapper[4913]: I0121 06:53:26.090607 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c","Type":"ContainerDied","Data":"dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732"} Jan 21 06:53:26 crc kubenswrapper[4913]: I0121 06:53:26.539192 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" path="/var/lib/kubelet/pods/3556b2f0-f34e-47d9-b864-c0a7e8b6989c/volumes" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.731575 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.831649 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data-custom\") pod \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.831698 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9dlj\" (UniqueName: \"kubernetes.io/projected/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-kube-api-access-w9dlj\") pod \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.831768 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-etc-machine-id\") pod \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.831809 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-scripts\") pod \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.831863 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data\") pod \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\" (UID: \"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c\") " Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.831968 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" (UID: "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.832164 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.837207 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" (UID: "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.838157 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-scripts" (OuterVolumeSpecName: "scripts") pod "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" (UID: "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.838860 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-kube-api-access-w9dlj" (OuterVolumeSpecName: "kube-api-access-w9dlj") pod "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" (UID: "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c"). InnerVolumeSpecName "kube-api-access-w9dlj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.918819 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data" (OuterVolumeSpecName: "config-data") pod "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" (UID: "2f8a65c6-8b25-40a7-ac8c-432fc6232b2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.933911 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.934098 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9dlj\" (UniqueName: \"kubernetes.io/projected/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-kube-api-access-w9dlj\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.934201 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:29 crc kubenswrapper[4913]: I0121 06:53:29.934280 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.126884 4913 generic.go:334] "Generic (PLEG): container finished" podID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerID="586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d" exitCode=0 Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.126949 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" 
event={"ID":"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c","Type":"ContainerDied","Data":"586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d"} Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.127002 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-1" event={"ID":"2f8a65c6-8b25-40a7-ac8c-432fc6232b2c","Type":"ContainerDied","Data":"b16ed940864ef946c18ba1dbbbda2af9f6fa0f1dd70aecdf23a72e769b81c37b"} Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.127031 4913 scope.go:117] "RemoveContainer" containerID="dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.127097 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-1" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.163755 4913 scope.go:117] "RemoveContainer" containerID="586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.185449 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.194800 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-1"] Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.207974 4913 scope.go:117] "RemoveContainer" containerID="dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732" Jan 21 06:53:30 crc kubenswrapper[4913]: E0121 06:53:30.209489 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732\": container with ID starting with dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732 not found: ID does not exist" containerID="dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732" Jan 21 06:53:30 crc 
kubenswrapper[4913]: I0121 06:53:30.209623 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732"} err="failed to get container status \"dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732\": rpc error: code = NotFound desc = could not find container \"dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732\": container with ID starting with dee0365cc3699f883a512ada059863638b4e179d9d5915116004ee7a3526a732 not found: ID does not exist" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.209665 4913 scope.go:117] "RemoveContainer" containerID="586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d" Jan 21 06:53:30 crc kubenswrapper[4913]: E0121 06:53:30.210420 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d\": container with ID starting with 586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d not found: ID does not exist" containerID="586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.210481 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d"} err="failed to get container status \"586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d\": rpc error: code = NotFound desc = could not find container \"586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d\": container with ID starting with 586060f9557f2e5b3d8fb0752b124543451f0c64d1144795de007fa4cbcb953d not found: ID does not exist" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.544538 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" 
path="/var/lib/kubelet/pods/2f8a65c6-8b25-40a7-ac8c-432fc6232b2c/volumes" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.912351 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 21 06:53:30 crc kubenswrapper[4913]: E0121 06:53:30.912815 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="probe" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.912844 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="probe" Jan 21 06:53:30 crc kubenswrapper[4913]: E0121 06:53:30.912897 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="probe" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.912911 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="probe" Jan 21 06:53:30 crc kubenswrapper[4913]: E0121 06:53:30.912935 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="cinder-scheduler" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.912954 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="cinder-scheduler" Jan 21 06:53:30 crc kubenswrapper[4913]: E0121 06:53:30.912974 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="cinder-scheduler" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.912987 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="cinder-scheduler" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.913205 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="probe" Jan 21 06:53:30 crc 
kubenswrapper[4913]: I0121 06:53:30.913227 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="probe" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.913244 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="3556b2f0-f34e-47d9-b864-c0a7e8b6989c" containerName="cinder-scheduler" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.913278 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8a65c6-8b25-40a7-ac8c-432fc6232b2c" containerName="cinder-scheduler" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.914481 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:30 crc kubenswrapper[4913]: I0121 06:53:30.934056 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.052786 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-iscsi\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.052855 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-lib-modules\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.052899 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-run\") pod \"cinder-backup-1\" (UID: 
\"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.052935 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-cinder\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053119 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053239 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-nvme\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053321 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-scripts\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053387 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-brick\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " 
pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053444 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw4dk\" (UniqueName: \"kubernetes.io/projected/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-kube-api-access-zw4dk\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053529 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data-custom\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053553 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-sys\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053629 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-lib-cinder\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053745 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-machine-id\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 
06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.053781 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-dev\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155128 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-sys\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155248 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-lib-cinder\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155329 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-machine-id\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155353 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-lib-cinder\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155380 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-dev\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155249 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-sys\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155444 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-iscsi\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155449 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-machine-id\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155495 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-lib-modules\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155562 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-iscsi\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc 
kubenswrapper[4913]: I0121 06:53:31.155638 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-dev\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155666 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-lib-modules\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155707 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-run\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155749 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-cinder\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155780 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155797 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-run\") pod 
\"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155811 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-nvme\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155848 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-scripts\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155886 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-brick\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155895 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-cinder\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.155913 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw4dk\" (UniqueName: \"kubernetes.io/projected/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-kube-api-access-zw4dk\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: 
I0121 06:53:31.155953 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data-custom\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.156041 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-nvme\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.156071 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-brick\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.161843 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.163558 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-scripts\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.165268 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data-custom\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.185841 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw4dk\" (UniqueName: \"kubernetes.io/projected/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-kube-api-access-zw4dk\") pod \"cinder-backup-1\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.243646 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.526467 4913 scope.go:117] "RemoveContainer" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.526847 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0" Jan 21 06:53:31 crc kubenswrapper[4913]: E0121 06:53:31.527113 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 20s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:53:31 crc kubenswrapper[4913]: I0121 06:53:31.717394 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 21 06:53:31 crc kubenswrapper[4913]: W0121 06:53:31.725810 4913 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33e852d1_210d_4846_b7bf_b0a2dba9b6d2.slice/crio-97d88e8a2bdbc48a7dd49916f669da7ba2d02d4f1eef684afa89823f7c081071 WatchSource:0}: Error finding container 97d88e8a2bdbc48a7dd49916f669da7ba2d02d4f1eef684afa89823f7c081071: Status 404 returned error can't find the container with id 97d88e8a2bdbc48a7dd49916f669da7ba2d02d4f1eef684afa89823f7c081071 Jan 21 06:53:32 crc kubenswrapper[4913]: I0121 06:53:32.154301 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"33e852d1-210d-4846-b7bf-b0a2dba9b6d2","Type":"ContainerStarted","Data":"8da99449a33b9c4818a12d3451ca956c161e23ef45279ac9bed82ac48f6d6e69"} Jan 21 06:53:32 crc kubenswrapper[4913]: I0121 06:53:32.155755 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"33e852d1-210d-4846-b7bf-b0a2dba9b6d2","Type":"ContainerStarted","Data":"880b2d46b49b312035d32804bfa463417936302ace5321d2b076de9344864aaa"} Jan 21 06:53:32 crc kubenswrapper[4913]: I0121 06:53:32.155829 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"33e852d1-210d-4846-b7bf-b0a2dba9b6d2","Type":"ContainerStarted","Data":"97d88e8a2bdbc48a7dd49916f669da7ba2d02d4f1eef684afa89823f7c081071"} Jan 21 06:53:32 crc kubenswrapper[4913]: I0121 06:53:32.176827 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-1" podStartSLOduration=2.176809849 podStartE2EDuration="2.176809849s" podCreationTimestamp="2026-01-21 06:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:53:32.172982777 +0000 UTC m=+1101.969342450" watchObservedRunningTime="2026-01-21 06:53:32.176809849 +0000 UTC m=+1101.973169522" Jan 21 06:53:36 crc 
kubenswrapper[4913]: I0121 06:53:36.244291 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:36 crc kubenswrapper[4913]: I0121 06:53:36.485298 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.243893 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.245461 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.261786 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360693 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data-custom\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360743 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjnxd\" (UniqueName: \"kubernetes.io/projected/61fafe77-c820-4eff-8892-f1e725d2ec2d-kube-api-access-cjnxd\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360772 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " 
pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360801 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-dev\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360865 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-nvme\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360922 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-iscsi\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360957 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-cinder\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.360982 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-run\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.361031 
4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-lib-modules\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.361061 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-machine-id\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.361104 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-lib-cinder\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.361127 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-sys\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.361188 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-scripts\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.361287 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-brick\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.462813 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-iscsi\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.462920 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-cinder\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.462974 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-run\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463000 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-run\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.462919 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-iscsi\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " 
pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463028 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-lib-modules\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.462976 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-cinder\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463092 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-machine-id\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463153 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-machine-id\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463114 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-lib-modules\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463200 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-lib-cinder\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463237 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-lib-cinder\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463269 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-sys\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463292 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-sys\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463404 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-scripts\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463487 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-brick\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " 
pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463660 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data-custom\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463729 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjnxd\" (UniqueName: \"kubernetes.io/projected/61fafe77-c820-4eff-8892-f1e725d2ec2d-kube-api-access-cjnxd\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463786 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463865 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-dev\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463755 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-brick\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.463943 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-nvme\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.464032 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-dev\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.464112 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-nvme\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.468532 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-scripts\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.469394 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data-custom\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.469528 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " 
pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.484150 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjnxd\" (UniqueName: \"kubernetes.io/projected/61fafe77-c820-4eff-8892-f1e725d2ec2d-kube-api-access-cjnxd\") pod \"cinder-backup-2\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:37 crc kubenswrapper[4913]: I0121 06:53:37.565638 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:38 crc kubenswrapper[4913]: I0121 06:53:38.029069 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 21 06:53:38 crc kubenswrapper[4913]: I0121 06:53:38.200709 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"61fafe77-c820-4eff-8892-f1e725d2ec2d","Type":"ContainerStarted","Data":"2792866368f04fea46de0be2740781f256cfc0dc091be293aaeeb16fba60fbb5"} Jan 21 06:53:38 crc kubenswrapper[4913]: I0121 06:53:38.319408 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:53:38 crc kubenswrapper[4913]: I0121 06:53:38.320101 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:53:39 crc kubenswrapper[4913]: I0121 06:53:39.214716 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" 
event={"ID":"61fafe77-c820-4eff-8892-f1e725d2ec2d","Type":"ContainerStarted","Data":"e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4"} Jan 21 06:53:39 crc kubenswrapper[4913]: I0121 06:53:39.215284 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"61fafe77-c820-4eff-8892-f1e725d2ec2d","Type":"ContainerStarted","Data":"b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4"} Jan 21 06:53:39 crc kubenswrapper[4913]: I0121 06:53:39.243345 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-2" podStartSLOduration=2.243321824 podStartE2EDuration="2.243321824s" podCreationTimestamp="2026-01-21 06:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:53:39.243204481 +0000 UTC m=+1109.039564164" watchObservedRunningTime="2026-01-21 06:53:39.243321824 +0000 UTC m=+1109.039681527" Jan 21 06:53:42 crc kubenswrapper[4913]: I0121 06:53:42.566121 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:46 crc kubenswrapper[4913]: I0121 06:53:46.527161 4913 scope.go:117] "RemoveContainer" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca" Jan 21 06:53:46 crc kubenswrapper[4913]: I0121 06:53:46.527477 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0" Jan 21 06:53:47 crc kubenswrapper[4913]: I0121 06:53:47.296277 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e"} Jan 21 06:53:47 crc kubenswrapper[4913]: I0121 06:53:47.297026 4913 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerStarted","Data":"22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542"} Jan 21 06:53:47 crc kubenswrapper[4913]: I0121 06:53:47.749495 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.308750 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e" exitCode=1 Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.308809 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e"} Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.308884 4913 scope.go:117] "RemoveContainer" containerID="d36b8a191a8e6a1d01d0c56da5c485468eb3282e86c57e40eeb89041484b71e0" Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.309604 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e" Jan 21 06:53:48 crc kubenswrapper[4913]: E0121 06:53:48.309852 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.361040 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.419342 4913 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.419651 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-2" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="cinder-backup" containerID="cri-o://b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4" gracePeriod=30 Jan 21 06:53:48 crc kubenswrapper[4913]: I0121 06:53:48.419750 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-2" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="probe" containerID="cri-o://e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4" gracePeriod=30 Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.320255 4913 generic.go:334] "Generic (PLEG): container finished" podID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542" exitCode=1 Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.320684 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542"} Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.320740 4913 scope.go:117] "RemoveContainer" containerID="6be39869666a295bac68612c22875e40094643f553c0e7d0118e3b4a2d8697ca" Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.321829 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542" Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.321908 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e" Jan 21 06:53:49 crc kubenswrapper[4913]: E0121 06:53:49.322455 4913 pod_workers.go:1301] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.330054 4913 generic.go:334] "Generic (PLEG): container finished" podID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerID="e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4" exitCode=0 Jan 21 06:53:49 crc kubenswrapper[4913]: I0121 06:53:49.330460 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"61fafe77-c820-4eff-8892-f1e725d2ec2d","Type":"ContainerDied","Data":"e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4"} Jan 21 06:53:50 crc kubenswrapper[4913]: I0121 06:53:50.342780 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542" Jan 21 06:53:50 crc kubenswrapper[4913]: I0121 06:53:50.342820 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e" Jan 21 06:53:50 crc kubenswrapper[4913]: E0121 06:53:50.343139 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe 
pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:53:51 crc kubenswrapper[4913]: I0121 06:53:51.361655 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:53:51 crc kubenswrapper[4913]: I0121 06:53:51.362661 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542" Jan 21 06:53:51 crc kubenswrapper[4913]: I0121 06:53:51.362685 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e" Jan 21 06:53:51 crc kubenswrapper[4913]: E0121 06:53:51.363063 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:53:53 crc kubenswrapper[4913]: I0121 06:53:53.361116 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:53:53 crc kubenswrapper[4913]: I0121 06:53:53.362333 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542" Jan 21 06:53:53 crc kubenswrapper[4913]: I0121 06:53:53.362351 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e" Jan 21 06:53:53 crc kubenswrapper[4913]: E0121 06:53:53.362668 4913 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.072407 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251199 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-run\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251294 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-run" (OuterVolumeSpecName: "run") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251309 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-brick\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251346 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-machine-id\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251376 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-iscsi\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251402 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251416 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-nvme\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251432 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251463 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-lib-cinder\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251518 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251555 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-sys\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251616 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data-custom\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251649 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-cinder\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251559 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251579 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251553 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251649 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-sys" (OuterVolumeSpecName: "sys") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251776 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-lib-modules\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251807 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-scripts\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251831 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-dev\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251861 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjnxd\" (UniqueName: \"kubernetes.io/projected/61fafe77-c820-4eff-8892-f1e725d2ec2d-kube-api-access-cjnxd\") pod \"61fafe77-c820-4eff-8892-f1e725d2ec2d\" (UID: \"61fafe77-c820-4eff-8892-f1e725d2ec2d\") " Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251924 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.251979 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252029 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-dev" (OuterVolumeSpecName: "dev") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252343 4913 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252357 4913 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-dev\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252366 4913 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-run\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252376 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252388 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252396 4913 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252404 4913 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252413 4913 reconciler_common.go:293] "Volume detached for volume 
\"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252421 4913 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-sys\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.252429 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/61fafe77-c820-4eff-8892-f1e725d2ec2d-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.257004 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.257808 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61fafe77-c820-4eff-8892-f1e725d2ec2d-kube-api-access-cjnxd" (OuterVolumeSpecName: "kube-api-access-cjnxd") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "kube-api-access-cjnxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.263617 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-scripts" (OuterVolumeSpecName: "scripts") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.352085 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data" (OuterVolumeSpecName: "config-data") pod "61fafe77-c820-4eff-8892-f1e725d2ec2d" (UID: "61fafe77-c820-4eff-8892-f1e725d2ec2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.355233 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.355334 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.355373 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61fafe77-c820-4eff-8892-f1e725d2ec2d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.355447 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjnxd\" (UniqueName: \"kubernetes.io/projected/61fafe77-c820-4eff-8892-f1e725d2ec2d-kube-api-access-cjnxd\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.377103 4913 generic.go:334] "Generic (PLEG): container finished" podID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerID="b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4" exitCode=0 Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.377158 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" 
event={"ID":"61fafe77-c820-4eff-8892-f1e725d2ec2d","Type":"ContainerDied","Data":"b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4"} Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.377236 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-2" event={"ID":"61fafe77-c820-4eff-8892-f1e725d2ec2d","Type":"ContainerDied","Data":"2792866368f04fea46de0be2740781f256cfc0dc091be293aaeeb16fba60fbb5"} Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.377178 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-2" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.377271 4913 scope.go:117] "RemoveContainer" containerID="e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.405556 4913 scope.go:117] "RemoveContainer" containerID="b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.418404 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.428185 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-2"] Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.434582 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.434894 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-1" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="cinder-backup" containerID="cri-o://880b2d46b49b312035d32804bfa463417936302ace5321d2b076de9344864aaa" gracePeriod=30 Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.435430 4913 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="cinder-kuttl-tests/cinder-backup-1" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="probe" containerID="cri-o://8da99449a33b9c4818a12d3451ca956c161e23ef45279ac9bed82ac48f6d6e69" gracePeriod=30 Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.446580 4913 scope.go:117] "RemoveContainer" containerID="e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4" Jan 21 06:53:54 crc kubenswrapper[4913]: E0121 06:53:54.447190 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4\": container with ID starting with e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4 not found: ID does not exist" containerID="e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.447219 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4"} err="failed to get container status \"e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4\": rpc error: code = NotFound desc = could not find container \"e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4\": container with ID starting with e640931330dd3b8c58d9b2c43d8ba21a084d778a265ffe79142bdac485c11ae4 not found: ID does not exist" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.447258 4913 scope.go:117] "RemoveContainer" containerID="b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4" Jan 21 06:53:54 crc kubenswrapper[4913]: E0121 06:53:54.447625 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4\": container with ID starting with b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4 not found: ID 
does not exist" containerID="b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.447641 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4"} err="failed to get container status \"b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4\": rpc error: code = NotFound desc = could not find container \"b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4\": container with ID starting with b43018453b2aff063f7740c801d291a10046a8528568a28df7116cd606d928b4 not found: ID does not exist" Jan 21 06:53:54 crc kubenswrapper[4913]: I0121 06:53:54.535573 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" path="/var/lib/kubelet/pods/61fafe77-c820-4eff-8892-f1e725d2ec2d/volumes" Jan 21 06:53:56 crc kubenswrapper[4913]: I0121 06:53:56.929225 4913 generic.go:334] "Generic (PLEG): container finished" podID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerID="8da99449a33b9c4818a12d3451ca956c161e23ef45279ac9bed82ac48f6d6e69" exitCode=0 Jan 21 06:53:56 crc kubenswrapper[4913]: I0121 06:53:56.929280 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"33e852d1-210d-4846-b7bf-b0a2dba9b6d2","Type":"ContainerDied","Data":"8da99449a33b9c4818a12d3451ca956c161e23ef45279ac9bed82ac48f6d6e69"} Jan 21 06:53:58 crc kubenswrapper[4913]: I0121 06:53:58.948866 4913 generic.go:334] "Generic (PLEG): container finished" podID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerID="880b2d46b49b312035d32804bfa463417936302ace5321d2b076de9344864aaa" exitCode=0 Jan 21 06:53:58 crc kubenswrapper[4913]: I0121 06:53:58.949007 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" 
event={"ID":"33e852d1-210d-4846-b7bf-b0a2dba9b6d2","Type":"ContainerDied","Data":"880b2d46b49b312035d32804bfa463417936302ace5321d2b076de9344864aaa"} Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.710517 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876450 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-machine-id\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876509 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-dev\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876544 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-cinder\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876610 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-scripts\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876649 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-lib-cinder\") pod 
\"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876678 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-run\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876714 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw4dk\" (UniqueName: \"kubernetes.io/projected/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-kube-api-access-zw4dk\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876767 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-iscsi\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876786 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-brick\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876803 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-sys\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876820 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-lib-modules\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876844 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data-custom\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876881 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-nvme\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.876923 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data\") pod \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\" (UID: \"33e852d1-210d-4846-b7bf-b0a2dba9b6d2\") " Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883742 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-sys" (OuterVolumeSpecName: "sys") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883803 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883850 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883881 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883922 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883937 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883944 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883984 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.884004 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-dev" (OuterVolumeSpecName: "dev") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.883996 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-run" (OuterVolumeSpecName: "run") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.887389 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-kube-api-access-zw4dk" (OuterVolumeSpecName: "kube-api-access-zw4dk") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "kube-api-access-zw4dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.887854 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.888621 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-scripts" (OuterVolumeSpecName: "scripts") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.965323 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-1" event={"ID":"33e852d1-210d-4846-b7bf-b0a2dba9b6d2","Type":"ContainerDied","Data":"97d88e8a2bdbc48a7dd49916f669da7ba2d02d4f1eef684afa89823f7c081071"} Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.965466 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-1" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.966691 4913 scope.go:117] "RemoveContainer" containerID="8da99449a33b9c4818a12d3451ca956c161e23ef45279ac9bed82ac48f6d6e69" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979688 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979746 4913 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-dev\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979774 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979798 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979822 4913 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979847 4913 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-run\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979874 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw4dk\" (UniqueName: 
\"kubernetes.io/projected/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-kube-api-access-zw4dk\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979902 4913 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979957 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.979982 4913 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-sys\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.980005 4913 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.980029 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.980085 4913 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 21 06:53:59 crc kubenswrapper[4913]: I0121 06:53:59.989807 4913 scope.go:117] "RemoveContainer" containerID="880b2d46b49b312035d32804bfa463417936302ace5321d2b076de9344864aaa" Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.002292 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data" (OuterVolumeSpecName: "config-data") pod "33e852d1-210d-4846-b7bf-b0a2dba9b6d2" (UID: "33e852d1-210d-4846-b7bf-b0a2dba9b6d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.082684 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e852d1-210d-4846-b7bf-b0a2dba9b6d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.300743 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.304933 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-1"] Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.538319 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" path="/var/lib/kubelet/pods/33e852d1-210d-4846-b7bf-b0a2dba9b6d2/volumes" Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.977179 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.977496 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api-log" containerID="cri-o://8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c" gracePeriod=30 Jan 21 06:54:00 crc kubenswrapper[4913]: I0121 06:54:00.977611 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api" containerID="cri-o://a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828" gracePeriod=30 Jan 21 
06:54:01 crc kubenswrapper[4913]: I0121 06:54:01.991361 4913 generic.go:334] "Generic (PLEG): container finished" podID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerID="8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c" exitCode=143 Jan 21 06:54:01 crc kubenswrapper[4913]: I0121 06:54:01.991814 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"ff633750-3c62-48b5-b977-f1b4f42b9b7e","Type":"ContainerDied","Data":"8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c"} Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.133430 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-0" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.90:8776/healthcheck\": read tcp 10.217.0.2:46466->10.217.0.90:8776: read: connection reset by peer" Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.560164 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655559 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2nvv\" (UniqueName: \"kubernetes.io/projected/ff633750-3c62-48b5-b977-f1b4f42b9b7e-kube-api-access-g2nvv\") pod \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655700 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data-custom\") pod \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655752 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-scripts\") pod \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655841 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff633750-3c62-48b5-b977-f1b4f42b9b7e-etc-machine-id\") pod \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655883 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data\") pod \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655922 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ff633750-3c62-48b5-b977-f1b4f42b9b7e-logs\") pod \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\" (UID: \"ff633750-3c62-48b5-b977-f1b4f42b9b7e\") " Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.655977 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff633750-3c62-48b5-b977-f1b4f42b9b7e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ff633750-3c62-48b5-b977-f1b4f42b9b7e" (UID: "ff633750-3c62-48b5-b977-f1b4f42b9b7e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.656319 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff633750-3c62-48b5-b977-f1b4f42b9b7e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.656488 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff633750-3c62-48b5-b977-f1b4f42b9b7e-logs" (OuterVolumeSpecName: "logs") pod "ff633750-3c62-48b5-b977-f1b4f42b9b7e" (UID: "ff633750-3c62-48b5-b977-f1b4f42b9b7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.676104 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ff633750-3c62-48b5-b977-f1b4f42b9b7e" (UID: "ff633750-3c62-48b5-b977-f1b4f42b9b7e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.676219 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-scripts" (OuterVolumeSpecName: "scripts") pod "ff633750-3c62-48b5-b977-f1b4f42b9b7e" (UID: "ff633750-3c62-48b5-b977-f1b4f42b9b7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.677327 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff633750-3c62-48b5-b977-f1b4f42b9b7e-kube-api-access-g2nvv" (OuterVolumeSpecName: "kube-api-access-g2nvv") pod "ff633750-3c62-48b5-b977-f1b4f42b9b7e" (UID: "ff633750-3c62-48b5-b977-f1b4f42b9b7e"). InnerVolumeSpecName "kube-api-access-g2nvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.691454 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data" (OuterVolumeSpecName: "config-data") pod "ff633750-3c62-48b5-b977-f1b4f42b9b7e" (UID: "ff633750-3c62-48b5-b977-f1b4f42b9b7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.758573 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.758685 4913 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff633750-3c62-48b5-b977-f1b4f42b9b7e-logs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.758706 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2nvv\" (UniqueName: \"kubernetes.io/projected/ff633750-3c62-48b5-b977-f1b4f42b9b7e-kube-api-access-g2nvv\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.758723 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:04 crc kubenswrapper[4913]: I0121 06:54:04.758741 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff633750-3c62-48b5-b977-f1b4f42b9b7e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.021831 4913 generic.go:334] "Generic (PLEG): container finished" podID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerID="a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828" exitCode=0 Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.021916 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"ff633750-3c62-48b5-b977-f1b4f42b9b7e","Type":"ContainerDied","Data":"a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828"} Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.021980 4913 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.022415 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"ff633750-3c62-48b5-b977-f1b4f42b9b7e","Type":"ContainerDied","Data":"e1fddda6c1e6decf5e5643506dbc75a4dd729be32e59911c8fadb68f6bccd2a1"} Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.022793 4913 scope.go:117] "RemoveContainer" containerID="a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828" Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.055276 4913 scope.go:117] "RemoveContainer" containerID="8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c" Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.065404 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.077233 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.090332 4913 scope.go:117] "RemoveContainer" containerID="a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828" Jan 21 06:54:05 crc kubenswrapper[4913]: E0121 06:54:05.090749 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828\": container with ID starting with a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828 not found: ID does not exist" containerID="a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828" Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.090845 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828"} err="failed to get container 
status \"a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828\": rpc error: code = NotFound desc = could not find container \"a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828\": container with ID starting with a0c1979d205720c8488ba47e8769168fad3c7358e15382b9581f2205f2f9b828 not found: ID does not exist" Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.090880 4913 scope.go:117] "RemoveContainer" containerID="8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c" Jan 21 06:54:05 crc kubenswrapper[4913]: E0121 06:54:05.091294 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c\": container with ID starting with 8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c not found: ID does not exist" containerID="8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c" Jan 21 06:54:05 crc kubenswrapper[4913]: I0121 06:54:05.091320 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c"} err="failed to get container status \"8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c\": rpc error: code = NotFound desc = could not find container \"8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c\": container with ID starting with 8acd0b792f355bb0aab7d4ba6a44660f3ea45326aa32469c72549919a58c5f0c not found: ID does not exist" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.450947 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.451676 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="probe" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.451698 4913 
state_mem.go:107] "Deleted CPUSet assignment" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="probe" Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.451718 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.451728 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api" Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.451752 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="cinder-backup" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.451763 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="cinder-backup" Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.451784 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="probe" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.451793 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="probe" Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.451807 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api-log" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.451816 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api-log" Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.451854 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="cinder-backup" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.451866 4913 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="cinder-backup" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.452113 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="cinder-backup" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.452132 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="probe" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.452150 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.452172 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e852d1-210d-4846-b7bf-b0a2dba9b6d2" containerName="cinder-backup" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.452187 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="61fafe77-c820-4eff-8892-f1e725d2ec2d" containerName="probe" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.452203 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" containerName="cinder-api-log" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.453277 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.463342 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.463976 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-api-config-data" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.464890 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.471312 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.473368 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.484680 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.492417 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.499938 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.527172 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.527207 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e" Jan 21 06:54:06 crc kubenswrapper[4913]: E0121 06:54:06.527495 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.540054 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ff633750-3c62-48b5-b977-f1b4f42b9b7e" path="/var/lib/kubelet/pods/ff633750-3c62-48b5-b977-f1b4f42b9b7e/volumes" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598121 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bfe045-07fb-48c6-aa71-356c7934f35a-logs\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598158 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data-custom\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598175 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-scripts\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598200 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598320 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data-custom\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " 
pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598462 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8bfe045-07fb-48c6-aa71-356c7934f35a-etc-machine-id\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598499 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47hrh\" (UniqueName: \"kubernetes.io/projected/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-kube-api-access-47hrh\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598528 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598554 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-scripts\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598571 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-scripts\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 
06:54:06.598659 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-logs\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598716 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-logs\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598765 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-etc-machine-id\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598829 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data-custom\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598880 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598928 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.598989 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mrl9\" (UniqueName: \"kubernetes.io/projected/a8bfe045-07fb-48c6-aa71-356c7934f35a-kube-api-access-4mrl9\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.599029 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lx2k\" (UniqueName: \"kubernetes.io/projected/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-kube-api-access-2lx2k\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700300 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bfe045-07fb-48c6-aa71-356c7934f35a-logs\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700338 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data-custom\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700364 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-scripts\") pod \"cinder-api-2\" (UID: 
\"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700389 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700416 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data-custom\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700459 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8bfe045-07fb-48c6-aa71-356c7934f35a-etc-machine-id\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700477 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47hrh\" (UniqueName: \"kubernetes.io/projected/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-kube-api-access-47hrh\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700496 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700511 4913 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-scripts\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700525 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-scripts\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700550 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-logs\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700563 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-logs\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700580 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-etc-machine-id\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700624 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700644 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700663 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700690 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mrl9\" (UniqueName: \"kubernetes.io/projected/a8bfe045-07fb-48c6-aa71-356c7934f35a-kube-api-access-4mrl9\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700705 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lx2k\" (UniqueName: \"kubernetes.io/projected/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-kube-api-access-2lx2k\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.700845 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bfe045-07fb-48c6-aa71-356c7934f35a-logs\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.701218 4913 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-logs\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.702236 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-etc-machine-id\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.702483 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8bfe045-07fb-48c6-aa71-356c7934f35a-etc-machine-id\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.702580 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-logs\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.702666 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.704513 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-scripts\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 
21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.705199 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.705423 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data-custom\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.705708 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.706835 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-scripts\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.707276 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.709249 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data-custom\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.709343 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data-custom\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.719329 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lx2k\" (UniqueName: \"kubernetes.io/projected/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-kube-api-access-2lx2k\") pod \"cinder-api-0\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.719909 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-scripts\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.729693 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47hrh\" (UniqueName: \"kubernetes.io/projected/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-kube-api-access-47hrh\") pod \"cinder-api-2\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.729843 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mrl9\" (UniqueName: \"kubernetes.io/projected/a8bfe045-07fb-48c6-aa71-356c7934f35a-kube-api-access-4mrl9\") pod \"cinder-api-1\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " 
pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.789913 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.807985 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:06 crc kubenswrapper[4913]: I0121 06:54:06.824836 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:07 crc kubenswrapper[4913]: I0121 06:54:07.017532 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:07 crc kubenswrapper[4913]: I0121 06:54:07.058644 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12","Type":"ContainerStarted","Data":"fc61141adedd9fabf2633556dfc1607679b4a48fb46bdb5f82cce9d946540273"} Jan 21 06:54:07 crc kubenswrapper[4913]: I0121 06:54:07.072496 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 21 06:54:07 crc kubenswrapper[4913]: W0121 06:54:07.076438 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8bfe045_07fb_48c6_aa71_356c7934f35a.slice/crio-48ae0363ed9907803635e0c3752ee8ae3f3386eea1528f6e74cb430f1b9f61fe WatchSource:0}: Error finding container 48ae0363ed9907803635e0c3752ee8ae3f3386eea1528f6e74cb430f1b9f61fe: Status 404 returned error can't find the container with id 48ae0363ed9907803635e0c3752ee8ae3f3386eea1528f6e74cb430f1b9f61fe Jan 21 06:54:07 crc kubenswrapper[4913]: I0121 06:54:07.112062 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 21 06:54:07 crc kubenswrapper[4913]: W0121 06:54:07.118496 4913 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode39e587a_7f6c_49d5_a5a3_1fb01ee2e790.slice/crio-9cc5bfe05dd4d574bc47f56ac894c3f8262e7a833558ae07ced88b1ea0a7bbf0 WatchSource:0}: Error finding container 9cc5bfe05dd4d574bc47f56ac894c3f8262e7a833558ae07ced88b1ea0a7bbf0: Status 404 returned error can't find the container with id 9cc5bfe05dd4d574bc47f56ac894c3f8262e7a833558ae07ced88b1ea0a7bbf0 Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.070968 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790","Type":"ContainerStarted","Data":"e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a"} Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.071444 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790","Type":"ContainerStarted","Data":"9cc5bfe05dd4d574bc47f56ac894c3f8262e7a833558ae07ced88b1ea0a7bbf0"} Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.079577 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"a8bfe045-07fb-48c6-aa71-356c7934f35a","Type":"ContainerStarted","Data":"a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d"} Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.079637 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"a8bfe045-07fb-48c6-aa71-356c7934f35a","Type":"ContainerStarted","Data":"48ae0363ed9907803635e0c3752ee8ae3f3386eea1528f6e74cb430f1b9f61fe"} Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.082094 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12","Type":"ContainerStarted","Data":"7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea"} Jan 21 06:54:08 crc 
kubenswrapper[4913]: I0121 06:54:08.322993 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.323043 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.323078 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.323639 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc5cf18289e8129ab669a4c6ce772cd7b24630b3756f5dce3bd40297fec710a6"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 06:54:08 crc kubenswrapper[4913]: I0121 06:54:08.323684 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://bc5cf18289e8129ab669a4c6ce772cd7b24630b3756f5dce3bd40297fec710a6" gracePeriod=600 Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.095027 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" 
containerID="bc5cf18289e8129ab669a4c6ce772cd7b24630b3756f5dce3bd40297fec710a6" exitCode=0 Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.095101 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"bc5cf18289e8129ab669a4c6ce772cd7b24630b3756f5dce3bd40297fec710a6"} Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.095747 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"a2c787182ddbb441e492633cc6e45e58f8a8e786c3e7cf757004c49b480a8800"} Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.095782 4913 scope.go:117] "RemoveContainer" containerID="e0c9de231b7b7faa5ada83afcec7341a724626bb49b10ceffe9951e2dc769908" Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.103102 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"a8bfe045-07fb-48c6-aa71-356c7934f35a","Type":"ContainerStarted","Data":"88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff"} Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.103237 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.105863 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12","Type":"ContainerStarted","Data":"7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3"} Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.106036 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.108290 4913 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790","Type":"ContainerStarted","Data":"71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3"} Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.108544 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.146454 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-1" podStartSLOduration=3.146430159 podStartE2EDuration="3.146430159s" podCreationTimestamp="2026-01-21 06:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:09.143038437 +0000 UTC m=+1138.939398110" watchObservedRunningTime="2026-01-21 06:54:09.146430159 +0000 UTC m=+1138.942789842" Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.170433 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-2" podStartSLOduration=3.170408565 podStartE2EDuration="3.170408565s" podCreationTimestamp="2026-01-21 06:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:09.164968918 +0000 UTC m=+1138.961328671" watchObservedRunningTime="2026-01-21 06:54:09.170408565 +0000 UTC m=+1138.966768258" Jan 21 06:54:09 crc kubenswrapper[4913]: I0121 06:54:09.189148 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-0" podStartSLOduration=3.189120258 podStartE2EDuration="3.189120258s" podCreationTimestamp="2026-01-21 06:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:09.180105146 +0000 UTC m=+1138.976464859" 
watchObservedRunningTime="2026-01-21 06:54:09.189120258 +0000 UTC m=+1138.985479951" Jan 21 06:54:18 crc kubenswrapper[4913]: I0121 06:54:18.527004 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542" Jan 21 06:54:18 crc kubenswrapper[4913]: I0121 06:54:18.527782 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e" Jan 21 06:54:18 crc kubenswrapper[4913]: E0121 06:54:18.528287 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 40s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(5c8afc51-5054-46d8-a16d-e07541ff4af7)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" Jan 21 06:54:18 crc kubenswrapper[4913]: I0121 06:54:18.649973 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:18 crc kubenswrapper[4913]: I0121 06:54:18.806747 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:18 crc kubenswrapper[4913]: I0121 06:54:18.858679 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.840083 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.840737 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-2" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" 
containerName="cinder-api-log" containerID="cri-o://e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a" gracePeriod=30 Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.840759 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-2" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api" containerID="cri-o://71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3" gracePeriod=30 Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.847282 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="cinder-kuttl-tests/cinder-api-2" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.98:8776/healthcheck\": EOF" Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.850980 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.851277 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-1" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api-log" containerID="cri-o://a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d" gracePeriod=30 Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.851662 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-1" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api" containerID="cri-o://88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff" gracePeriod=30 Jan 21 06:54:19 crc kubenswrapper[4913]: I0121 06:54:19.863724 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="cinder-kuttl-tests/cinder-api-1" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.97:8776/healthcheck\": EOF" Jan 21 06:54:20 crc 
kubenswrapper[4913]: I0121 06:54:20.218089 4913 generic.go:334] "Generic (PLEG): container finished" podID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerID="e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a" exitCode=143 Jan 21 06:54:20 crc kubenswrapper[4913]: I0121 06:54:20.218159 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790","Type":"ContainerDied","Data":"e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a"} Jan 21 06:54:20 crc kubenswrapper[4913]: I0121 06:54:20.219627 4913 generic.go:334] "Generic (PLEG): container finished" podID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerID="a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d" exitCode=143 Jan 21 06:54:20 crc kubenswrapper[4913]: I0121 06:54:20.219651 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"a8bfe045-07fb-48c6-aa71-356c7934f35a","Type":"ContainerDied","Data":"a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d"} Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.343193 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-2" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.98:8776/healthcheck\": read tcp 10.217.0.2:36410->10.217.0.98:8776: read: connection reset by peer" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.351744 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-1" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.97:8776/healthcheck\": read tcp 10.217.0.2:37910->10.217.0.97:8776: read: connection reset by peer" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.630137 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.702525 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792440 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data-custom\") pod \"a8bfe045-07fb-48c6-aa71-356c7934f35a\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792498 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bfe045-07fb-48c6-aa71-356c7934f35a-logs\") pod \"a8bfe045-07fb-48c6-aa71-356c7934f35a\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792529 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8bfe045-07fb-48c6-aa71-356c7934f35a-etc-machine-id\") pod \"a8bfe045-07fb-48c6-aa71-356c7934f35a\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792570 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-scripts\") pod \"a8bfe045-07fb-48c6-aa71-356c7934f35a\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792651 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data-custom\") pod \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " Jan 21 
06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792669 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data\") pod \"a8bfe045-07fb-48c6-aa71-356c7934f35a\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792727 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47hrh\" (UniqueName: \"kubernetes.io/projected/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-kube-api-access-47hrh\") pod \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792746 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-etc-machine-id\") pod \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792766 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mrl9\" (UniqueName: \"kubernetes.io/projected/a8bfe045-07fb-48c6-aa71-356c7934f35a-kube-api-access-4mrl9\") pod \"a8bfe045-07fb-48c6-aa71-356c7934f35a\" (UID: \"a8bfe045-07fb-48c6-aa71-356c7934f35a\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792786 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-logs\") pod \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792808 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data\") pod \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.792826 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-scripts\") pod \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\" (UID: \"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790\") " Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.794569 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8bfe045-07fb-48c6-aa71-356c7934f35a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a8bfe045-07fb-48c6-aa71-356c7934f35a" (UID: "a8bfe045-07fb-48c6-aa71-356c7934f35a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.798153 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8bfe045-07fb-48c6-aa71-356c7934f35a-kube-api-access-4mrl9" (OuterVolumeSpecName: "kube-api-access-4mrl9") pod "a8bfe045-07fb-48c6-aa71-356c7934f35a" (UID: "a8bfe045-07fb-48c6-aa71-356c7934f35a"). InnerVolumeSpecName "kube-api-access-4mrl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.798310 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" (UID: "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.798569 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-logs" (OuterVolumeSpecName: "logs") pod "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" (UID: "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.798622 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-scripts" (OuterVolumeSpecName: "scripts") pod "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" (UID: "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.798723 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a8bfe045-07fb-48c6-aa71-356c7934f35a" (UID: "a8bfe045-07fb-48c6-aa71-356c7934f35a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.799042 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8bfe045-07fb-48c6-aa71-356c7934f35a-logs" (OuterVolumeSpecName: "logs") pod "a8bfe045-07fb-48c6-aa71-356c7934f35a" (UID: "a8bfe045-07fb-48c6-aa71-356c7934f35a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.804293 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-kube-api-access-47hrh" (OuterVolumeSpecName: "kube-api-access-47hrh") pod "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" (UID: "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790"). InnerVolumeSpecName "kube-api-access-47hrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.804315 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" (UID: "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.805741 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-scripts" (OuterVolumeSpecName: "scripts") pod "a8bfe045-07fb-48c6-aa71-356c7934f35a" (UID: "a8bfe045-07fb-48c6-aa71-356c7934f35a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.832448 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data" (OuterVolumeSpecName: "config-data") pod "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" (UID: "e39e587a-7f6c-49d5-a5a3-1fb01ee2e790"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.842073 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data" (OuterVolumeSpecName: "config-data") pod "a8bfe045-07fb-48c6-aa71-356c7934f35a" (UID: "a8bfe045-07fb-48c6-aa71-356c7934f35a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894233 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47hrh\" (UniqueName: \"kubernetes.io/projected/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-kube-api-access-47hrh\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894271 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894283 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mrl9\" (UniqueName: \"kubernetes.io/projected/a8bfe045-07fb-48c6-aa71-356c7934f35a-kube-api-access-4mrl9\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894293 4913 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-logs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894304 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894316 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894326 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894339 4913 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8bfe045-07fb-48c6-aa71-356c7934f35a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894348 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a8bfe045-07fb-48c6-aa71-356c7934f35a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894356 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894363 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:24 crc kubenswrapper[4913]: I0121 06:54:24.894370 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8bfe045-07fb-48c6-aa71-356c7934f35a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.266936 4913 generic.go:334] "Generic (PLEG): container finished" podID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerID="88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff" exitCode=0 Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.267047 4913 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"a8bfe045-07fb-48c6-aa71-356c7934f35a","Type":"ContainerDied","Data":"88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff"} Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.267099 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-1" event={"ID":"a8bfe045-07fb-48c6-aa71-356c7934f35a","Type":"ContainerDied","Data":"48ae0363ed9907803635e0c3752ee8ae3f3386eea1528f6e74cb430f1b9f61fe"} Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.267126 4913 scope.go:117] "RemoveContainer" containerID="88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.267314 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-1" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.272084 4913 generic.go:334] "Generic (PLEG): container finished" podID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerID="71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3" exitCode=0 Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.272165 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790","Type":"ContainerDied","Data":"71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3"} Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.272210 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-2" event={"ID":"e39e587a-7f6c-49d5-a5a3-1fb01ee2e790","Type":"ContainerDied","Data":"9cc5bfe05dd4d574bc47f56ac894c3f8262e7a833558ae07ced88b1ea0a7bbf0"} Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.272311 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-2" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.289278 4913 scope.go:117] "RemoveContainer" containerID="a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.312630 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.325168 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-1"] Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.336295 4913 scope.go:117] "RemoveContainer" containerID="88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff" Jan 21 06:54:25 crc kubenswrapper[4913]: E0121 06:54:25.336815 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff\": container with ID starting with 88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff not found: ID does not exist" containerID="88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.336872 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff"} err="failed to get container status \"88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff\": rpc error: code = NotFound desc = could not find container \"88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff\": container with ID starting with 88383ff9c6d3b96e91d9a1f32257a4388fa1a5ad4a7927815d8ce8628800aaff not found: ID does not exist" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.336902 4913 scope.go:117] "RemoveContainer" containerID="a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d" Jan 21 06:54:25 crc 
kubenswrapper[4913]: E0121 06:54:25.337254 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d\": container with ID starting with a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d not found: ID does not exist" containerID="a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.337283 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d"} err="failed to get container status \"a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d\": rpc error: code = NotFound desc = could not find container \"a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d\": container with ID starting with a39192c2b8192891ac3667dbd650c82a23826e455a319a466d3450427b92d88d not found: ID does not exist" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.337298 4913 scope.go:117] "RemoveContainer" containerID="71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.347686 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.352996 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-2"] Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.356744 4913 scope.go:117] "RemoveContainer" containerID="e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.406028 4913 scope.go:117] "RemoveContainer" containerID="71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3" Jan 21 06:54:25 crc kubenswrapper[4913]: E0121 06:54:25.406644 4913 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3\": container with ID starting with 71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3 not found: ID does not exist" containerID="71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.406696 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3"} err="failed to get container status \"71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3\": rpc error: code = NotFound desc = could not find container \"71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3\": container with ID starting with 71bfb3cc10c90cb607846ab95eceb7aaeb33d65d6ad3d23c4618caffd56b5db3 not found: ID does not exist" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.406730 4913 scope.go:117] "RemoveContainer" containerID="e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a" Jan 21 06:54:25 crc kubenswrapper[4913]: E0121 06:54:25.407091 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a\": container with ID starting with e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a not found: ID does not exist" containerID="e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a" Jan 21 06:54:25 crc kubenswrapper[4913]: I0121 06:54:25.407120 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a"} err="failed to get container status \"e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a\": rpc error: code = NotFound desc = could not find container 
\"e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a\": container with ID starting with e41f97ae5ff2f37349036dc7375ba870e4fc7405d76af1ce39e7d978be9a5f8a not found: ID does not exist" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.240950 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-5h9sh"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.246644 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-5h9sh"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.279504 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.281205 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="cinder-scheduler" containerID="cri-o://54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482" gracePeriod=30 Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.281280 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="probe" containerID="cri-o://7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729" gracePeriod=30 Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.319151 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.347986 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.348363 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="probe" 
containerID="cri-o://001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963" gracePeriod=30 Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.348307 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="cinder-backup" containerID="cri-o://503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282" gracePeriod=30 Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.399677 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.399963 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api-log" containerID="cri-o://7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea" gracePeriod=30 Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.400366 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api" containerID="cri-o://7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3" gracePeriod=30 Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440145 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cindera707-account-delete-6zmgw"] Jan 21 06:54:26 crc kubenswrapper[4913]: E0121 06:54:26.440449 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api-log" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440463 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api-log" Jan 21 06:54:26 crc kubenswrapper[4913]: E0121 06:54:26.440471 4913 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440477 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api" Jan 21 06:54:26 crc kubenswrapper[4913]: E0121 06:54:26.440492 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api-log" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440498 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api-log" Jan 21 06:54:26 crc kubenswrapper[4913]: E0121 06:54:26.440514 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440520 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440650 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440660 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440675 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" containerName="cinder-api-log" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.440687 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" containerName="cinder-api-log" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.441184 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.459554 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cindera707-account-delete-6zmgw"] Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.524372 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a6b93f-5562-4cac-8cc0-5aefdf18537d-operator-scripts\") pod \"cindera707-account-delete-6zmgw\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.524450 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwt82\" (UniqueName: \"kubernetes.io/projected/19a6b93f-5562-4cac-8cc0-5aefdf18537d-kube-api-access-zwt82\") pod \"cindera707-account-delete-6zmgw\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.565555 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="431d7c8b-5c95-4534-8cba-dd55885fc5cb" path="/var/lib/kubelet/pods/431d7c8b-5c95-4534-8cba-dd55885fc5cb/volumes" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.566273 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8bfe045-07fb-48c6-aa71-356c7934f35a" path="/var/lib/kubelet/pods/a8bfe045-07fb-48c6-aa71-356c7934f35a/volumes" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.566854 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39e587a-7f6c-49d5-a5a3-1fb01ee2e790" path="/var/lib/kubelet/pods/e39e587a-7f6c-49d5-a5a3-1fb01ee2e790/volumes" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.627436 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-zwt82\" (UniqueName: \"kubernetes.io/projected/19a6b93f-5562-4cac-8cc0-5aefdf18537d-kube-api-access-zwt82\") pod \"cindera707-account-delete-6zmgw\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.627565 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a6b93f-5562-4cac-8cc0-5aefdf18537d-operator-scripts\") pod \"cindera707-account-delete-6zmgw\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.628902 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a6b93f-5562-4cac-8cc0-5aefdf18537d-operator-scripts\") pod \"cindera707-account-delete-6zmgw\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.649490 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwt82\" (UniqueName: \"kubernetes.io/projected/19a6b93f-5562-4cac-8cc0-5aefdf18537d-kube-api-access-zwt82\") pod \"cindera707-account-delete-6zmgw\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.768052 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.829186 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.933881 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-run\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.935441 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-run" (OuterVolumeSpecName: "run") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.937757 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-iscsi\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.937805 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.937867 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-machine-id\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.937912 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.937957 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data-custom\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938003 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-lib-modules\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938033 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-sys\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938059 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-nvme\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938107 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92vr2\" (UniqueName: \"kubernetes.io/projected/5c8afc51-5054-46d8-a16d-e07541ff4af7-kube-api-access-92vr2\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938163 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-brick\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938200 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-lib-cinder\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938222 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-scripts\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.938249 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-cinder\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 
06:54:26.938306 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-dev\") pod \"5c8afc51-5054-46d8-a16d-e07541ff4af7\" (UID: \"5c8afc51-5054-46d8-a16d-e07541ff4af7\") " Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.939089 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.939144 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.939391 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.939389 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.939653 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940333 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-sys" (OuterVolumeSpecName: "sys") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940381 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-dev" (OuterVolumeSpecName: "dev") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940410 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940654 4913 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940675 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940689 4913 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-dev\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940701 4913 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-run\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940712 4913 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940722 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940732 4913 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940743 4913 reconciler_common.go:293] "Volume detached for volume 
\"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940754 4913 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-sys\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.940766 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c8afc51-5054-46d8-a16d-e07541ff4af7-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.945237 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.945270 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c8afc51-5054-46d8-a16d-e07541ff4af7-kube-api-access-92vr2" (OuterVolumeSpecName: "kube-api-access-92vr2") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "kube-api-access-92vr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:26 crc kubenswrapper[4913]: I0121 06:54:26.946975 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-scripts" (OuterVolumeSpecName: "scripts") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.014371 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data" (OuterVolumeSpecName: "config-data") pod "5c8afc51-5054-46d8-a16d-e07541ff4af7" (UID: "5c8afc51-5054-46d8-a16d-e07541ff4af7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.043283 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.043341 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.043355 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c8afc51-5054-46d8-a16d-e07541ff4af7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.043375 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92vr2\" (UniqueName: \"kubernetes.io/projected/5c8afc51-5054-46d8-a16d-e07541ff4af7-kube-api-access-92vr2\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.052781 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cindera707-account-delete-6zmgw"] Jan 21 06:54:27 crc kubenswrapper[4913]: W0121 06:54:27.056885 4913 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19a6b93f_5562_4cac_8cc0_5aefdf18537d.slice/crio-0efca9704c8230224e9803d8a0108f40a7fc8767fab93ff1e896b9bc79e4e994 WatchSource:0}: Error finding container 0efca9704c8230224e9803d8a0108f40a7fc8767fab93ff1e896b9bc79e4e994: Status 404 returned error can't find the container with id 0efca9704c8230224e9803d8a0108f40a7fc8767fab93ff1e896b9bc79e4e994 Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.290046 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" event={"ID":"19a6b93f-5562-4cac-8cc0-5aefdf18537d","Type":"ContainerStarted","Data":"e1ea5f6140029683c8a222d0bbd83ba86ea14378a3c9af4d524c0c94971d8e7a"} Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.290532 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" event={"ID":"19a6b93f-5562-4cac-8cc0-5aefdf18537d","Type":"ContainerStarted","Data":"0efca9704c8230224e9803d8a0108f40a7fc8767fab93ff1e896b9bc79e4e994"} Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.292302 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"5c8afc51-5054-46d8-a16d-e07541ff4af7","Type":"ContainerDied","Data":"0914a5d09062d8edcaa686a31a8df3698c5a4b6a35c39a1ab80f1c4c3487a465"} Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.292336 4913 scope.go:117] "RemoveContainer" containerID="684a67c42bcb941d0325ae38b27c2498ea35f488365cd0823a0bdeaf5e818e1e" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.292423 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.294085 4913 generic.go:334] "Generic (PLEG): container finished" podID="b256a85e-47c9-4195-9732-d58250fd3f42" containerID="001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963" exitCode=0 Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.294137 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"b256a85e-47c9-4195-9732-d58250fd3f42","Type":"ContainerDied","Data":"001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963"} Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.296361 4913 generic.go:334] "Generic (PLEG): container finished" podID="7667e048-a702-4a50-8e72-35d001e6a310" containerID="7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729" exitCode=0 Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.296415 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"7667e048-a702-4a50-8e72-35d001e6a310","Type":"ContainerDied","Data":"7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729"} Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.298438 4913 generic.go:334] "Generic (PLEG): container finished" podID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerID="7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea" exitCode=143 Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.298472 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12","Type":"ContainerDied","Data":"7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea"} Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.312389 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" podStartSLOduration=1.3123552809999999 
podStartE2EDuration="1.312355281s" podCreationTimestamp="2026-01-21 06:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:27.310434548 +0000 UTC m=+1157.106794221" watchObservedRunningTime="2026-01-21 06:54:27.312355281 +0000 UTC m=+1157.108714954" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.336153 4913 scope.go:117] "RemoveContainer" containerID="22fad939aaecf2ae9faaf7c71ba417480fcbee27520b719c30d57c59e6d58542" Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.336480 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:27 crc kubenswrapper[4913]: I0121 06:54:27.352700 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:28 crc kubenswrapper[4913]: I0121 06:54:28.307425 4913 generic.go:334] "Generic (PLEG): container finished" podID="19a6b93f-5562-4cac-8cc0-5aefdf18537d" containerID="e1ea5f6140029683c8a222d0bbd83ba86ea14378a3c9af4d524c0c94971d8e7a" exitCode=0 Jan 21 06:54:28 crc kubenswrapper[4913]: I0121 06:54:28.307507 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" event={"ID":"19a6b93f-5562-4cac-8cc0-5aefdf18537d","Type":"ContainerDied","Data":"e1ea5f6140029683c8a222d0bbd83ba86ea14378a3c9af4d524c0c94971d8e7a"} Jan 21 06:54:28 crc kubenswrapper[4913]: I0121 06:54:28.537186 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" path="/var/lib/kubelet/pods/5c8afc51-5054-46d8-a16d-e07541ff4af7/volumes" Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.742532 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.845669 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-0" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.96:8776/healthcheck\": read tcp 10.217.0.2:47266->10.217.0.96:8776: read: connection reset by peer" Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.893213 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a6b93f-5562-4cac-8cc0-5aefdf18537d-operator-scripts\") pod \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.893326 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwt82\" (UniqueName: \"kubernetes.io/projected/19a6b93f-5562-4cac-8cc0-5aefdf18537d-kube-api-access-zwt82\") pod \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\" (UID: \"19a6b93f-5562-4cac-8cc0-5aefdf18537d\") " Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.894167 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a6b93f-5562-4cac-8cc0-5aefdf18537d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19a6b93f-5562-4cac-8cc0-5aefdf18537d" (UID: "19a6b93f-5562-4cac-8cc0-5aefdf18537d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.899875 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a6b93f-5562-4cac-8cc0-5aefdf18537d-kube-api-access-zwt82" (OuterVolumeSpecName: "kube-api-access-zwt82") pod "19a6b93f-5562-4cac-8cc0-5aefdf18537d" (UID: "19a6b93f-5562-4cac-8cc0-5aefdf18537d"). InnerVolumeSpecName "kube-api-access-zwt82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.994933 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwt82\" (UniqueName: \"kubernetes.io/projected/19a6b93f-5562-4cac-8cc0-5aefdf18537d-kube-api-access-zwt82\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:29 crc kubenswrapper[4913]: I0121 06:54:29.994975 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19a6b93f-5562-4cac-8cc0-5aefdf18537d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.014088 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096121 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-scripts\") pod \"7667e048-a702-4a50-8e72-35d001e6a310\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096228 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data\") pod \"7667e048-a702-4a50-8e72-35d001e6a310\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096314 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data-custom\") pod \"7667e048-a702-4a50-8e72-35d001e6a310\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096412 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7667e048-a702-4a50-8e72-35d001e6a310-etc-machine-id\") pod \"7667e048-a702-4a50-8e72-35d001e6a310\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096513 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sjpd\" (UniqueName: \"kubernetes.io/projected/7667e048-a702-4a50-8e72-35d001e6a310-kube-api-access-4sjpd\") pod \"7667e048-a702-4a50-8e72-35d001e6a310\" (UID: \"7667e048-a702-4a50-8e72-35d001e6a310\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096522 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/7667e048-a702-4a50-8e72-35d001e6a310-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7667e048-a702-4a50-8e72-35d001e6a310" (UID: "7667e048-a702-4a50-8e72-35d001e6a310"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.096934 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7667e048-a702-4a50-8e72-35d001e6a310-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.100787 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-scripts" (OuterVolumeSpecName: "scripts") pod "7667e048-a702-4a50-8e72-35d001e6a310" (UID: "7667e048-a702-4a50-8e72-35d001e6a310"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.101730 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7667e048-a702-4a50-8e72-35d001e6a310" (UID: "7667e048-a702-4a50-8e72-35d001e6a310"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.101849 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7667e048-a702-4a50-8e72-35d001e6a310-kube-api-access-4sjpd" (OuterVolumeSpecName: "kube-api-access-4sjpd") pod "7667e048-a702-4a50-8e72-35d001e6a310" (UID: "7667e048-a702-4a50-8e72-35d001e6a310"). InnerVolumeSpecName "kube-api-access-4sjpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.157628 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data" (OuterVolumeSpecName: "config-data") pod "7667e048-a702-4a50-8e72-35d001e6a310" (UID: "7667e048-a702-4a50-8e72-35d001e6a310"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.180433 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.200117 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sjpd\" (UniqueName: \"kubernetes.io/projected/7667e048-a702-4a50-8e72-35d001e6a310-kube-api-access-4sjpd\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.200143 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.200151 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.200160 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7667e048-a702-4a50-8e72-35d001e6a310-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301448 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-logs\") pod 
\"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301568 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data\") pod \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301641 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-scripts\") pod \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301673 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-etc-machine-id\") pod \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301755 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lx2k\" (UniqueName: \"kubernetes.io/projected/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-kube-api-access-2lx2k\") pod \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301785 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data-custom\") pod \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\" (UID: \"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.301786 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" (UID: "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.302154 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-logs" (OuterVolumeSpecName: "logs") pod "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" (UID: "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.302384 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.302416 4913 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-logs\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.304824 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-scripts" (OuterVolumeSpecName: "scripts") pod "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" (UID: "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.305664 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-kube-api-access-2lx2k" (OuterVolumeSpecName: "kube-api-access-2lx2k") pod "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" (UID: "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12"). InnerVolumeSpecName "kube-api-access-2lx2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.306354 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" (UID: "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.326553 4913 generic.go:334] "Generic (PLEG): container finished" podID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerID="7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3" exitCode=0 Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.326691 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12","Type":"ContainerDied","Data":"7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3"} Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.326706 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.326726 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12","Type":"ContainerDied","Data":"fc61141adedd9fabf2633556dfc1607679b4a48fb46bdb5f82cce9d946540273"} Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.326749 4913 scope.go:117] "RemoveContainer" containerID="7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.331014 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.331011 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cindera707-account-delete-6zmgw" event={"ID":"19a6b93f-5562-4cac-8cc0-5aefdf18537d","Type":"ContainerDied","Data":"0efca9704c8230224e9803d8a0108f40a7fc8767fab93ff1e896b9bc79e4e994"} Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.331069 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0efca9704c8230224e9803d8a0108f40a7fc8767fab93ff1e896b9bc79e4e994" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.334192 4913 generic.go:334] "Generic (PLEG): container finished" podID="7667e048-a702-4a50-8e72-35d001e6a310" containerID="54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482" exitCode=0 Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.334250 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"7667e048-a702-4a50-8e72-35d001e6a310","Type":"ContainerDied","Data":"54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482"} Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.334287 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"7667e048-a702-4a50-8e72-35d001e6a310","Type":"ContainerDied","Data":"f981181d61a91c8c4ac8291fd1a2e334c01d5ff76df7a42189afca1df38c5352"} Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.334359 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.348288 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data" (OuterVolumeSpecName: "config-data") pod "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" (UID: "c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.372222 4913 scope.go:117] "RemoveContainer" containerID="7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.389032 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.395068 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.403350 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.403373 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.403382 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lx2k\" (UniqueName: 
\"kubernetes.io/projected/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-kube-api-access-2lx2k\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.403391 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.423070 4913 scope.go:117] "RemoveContainer" containerID="7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3" Jan 21 06:54:30 crc kubenswrapper[4913]: E0121 06:54:30.423682 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3\": container with ID starting with 7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3 not found: ID does not exist" containerID="7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.423719 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3"} err="failed to get container status \"7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3\": rpc error: code = NotFound desc = could not find container \"7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3\": container with ID starting with 7876184bc6f25aca4d37e8e0dedf0c1213fe4a00f07f96ec28cd5f0c3bf4dbf3 not found: ID does not exist" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.423745 4913 scope.go:117] "RemoveContainer" containerID="7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea" Jan 21 06:54:30 crc kubenswrapper[4913]: E0121 06:54:30.424148 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea\": container with ID starting with 7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea not found: ID does not exist" containerID="7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.424228 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea"} err="failed to get container status \"7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea\": rpc error: code = NotFound desc = could not find container \"7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea\": container with ID starting with 7c31278153c07d862b6d54e1131233ba5ac6b76a58677203dc5eef7ad0a1b7ea not found: ID does not exist" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.424297 4913 scope.go:117] "RemoveContainer" containerID="7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.452457 4913 scope.go:117] "RemoveContainer" containerID="54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.477721 4913 scope.go:117] "RemoveContainer" containerID="7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729" Jan 21 06:54:30 crc kubenswrapper[4913]: E0121 06:54:30.478854 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729\": container with ID starting with 7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729 not found: ID does not exist" containerID="7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.478915 4913 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729"} err="failed to get container status \"7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729\": rpc error: code = NotFound desc = could not find container \"7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729\": container with ID starting with 7a7dca4ec271973abf4a2cf61c5a0a73fb1e9ba567b3360c994825b50953d729 not found: ID does not exist" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.478947 4913 scope.go:117] "RemoveContainer" containerID="54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482" Jan 21 06:54:30 crc kubenswrapper[4913]: E0121 06:54:30.479585 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482\": container with ID starting with 54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482 not found: ID does not exist" containerID="54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.479633 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482"} err="failed to get container status \"54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482\": rpc error: code = NotFound desc = could not find container \"54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482\": container with ID starting with 54e8cf679cf1e38cb237efaa431fd2045ff7215471c1693a40d702911ed79482 not found: ID does not exist" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.538011 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7667e048-a702-4a50-8e72-35d001e6a310" path="/var/lib/kubelet/pods/7667e048-a702-4a50-8e72-35d001e6a310/volumes" Jan 21 06:54:30 crc kubenswrapper[4913]: 
I0121 06:54:30.650181 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.655866 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.675647 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808073 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-nvme\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808116 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-machine-id\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808149 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n78xx\" (UniqueName: \"kubernetes.io/projected/b256a85e-47c9-4195-9732-d58250fd3f42-kube-api-access-n78xx\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808177 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-scripts\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808201 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-iscsi\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808231 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-lib-modules\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808243 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-sys\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808259 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-brick\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808272 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-run\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808285 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-cinder\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808308 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data-custom\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808347 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808394 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-dev\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808481 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-lib-cinder\") pod \"b256a85e-47c9-4195-9732-d58250fd3f42\" (UID: \"b256a85e-47c9-4195-9732-d58250fd3f42\") " Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808915 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808960 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.808987 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.809033 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-run" (OuterVolumeSpecName: "run") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.809046 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.809071 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.809091 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-dev" (OuterVolumeSpecName: "dev") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.809111 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.809199 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.810449 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-sys" (OuterVolumeSpecName: "sys") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.813170 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-scripts" (OuterVolumeSpecName: "scripts") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.813170 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b256a85e-47c9-4195-9732-d58250fd3f42-kube-api-access-n78xx" (OuterVolumeSpecName: "kube-api-access-n78xx") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "kube-api-access-n78xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.813878 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.896152 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data" (OuterVolumeSpecName: "config-data") pod "b256a85e-47c9-4195-9732-d58250fd3f42" (UID: "b256a85e-47c9-4195-9732-d58250fd3f42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909422 4913 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909449 4913 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909461 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909472 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n78xx\" (UniqueName: \"kubernetes.io/projected/b256a85e-47c9-4195-9732-d58250fd3f42-kube-api-access-n78xx\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909484 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909494 4913 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909502 4913 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909511 4913 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-sys\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909520 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909530 4913 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-run\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909539 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909549 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909558 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b256a85e-47c9-4195-9732-d58250fd3f42-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:30 crc kubenswrapper[4913]: I0121 06:54:30.909568 
4913 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b256a85e-47c9-4195-9732-d58250fd3f42-dev\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.345750 4913 generic.go:334] "Generic (PLEG): container finished" podID="b256a85e-47c9-4195-9732-d58250fd3f42" containerID="503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282" exitCode=0 Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.345842 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"b256a85e-47c9-4195-9732-d58250fd3f42","Type":"ContainerDied","Data":"503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282"} Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.345882 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"b256a85e-47c9-4195-9732-d58250fd3f42","Type":"ContainerDied","Data":"8d90cbd4148caa98577b4dcd08673fc8b51996acf7d066816c4fd8aad0d83fcf"} Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.345912 4913 scope.go:117] "RemoveContainer" containerID="001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.346098 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.380179 4913 scope.go:117] "RemoveContainer" containerID="503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.392318 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.399977 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.421739 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-88w4d"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.427071 4913 scope.go:117] "RemoveContainer" containerID="001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.427583 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963\": container with ID starting with 001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963 not found: ID does not exist" containerID="001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.427701 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963"} err="failed to get container status \"001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963\": rpc error: code = NotFound desc = could not find container \"001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963\": container with ID starting with 001e9ead1d423ff2c2cd4ea318a158f0b44f372df6448781448f03adb987e963 not found: ID does not exist" Jan 21 06:54:31 crc 
kubenswrapper[4913]: I0121 06:54:31.427742 4913 scope.go:117] "RemoveContainer" containerID="503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.428123 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282\": container with ID starting with 503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282 not found: ID does not exist" containerID="503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.428161 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282"} err="failed to get container status \"503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282\": rpc error: code = NotFound desc = could not find container \"503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282\": container with ID starting with 503f433c4d217a1cb52ebe0748b526beb869c464c100457b7022ff9756539282 not found: ID does not exist" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.428309 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-88w4d"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.441583 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.449845 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cindera707-account-delete-6zmgw"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.455935 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cindera707-account-delete-6zmgw"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.462684 4913 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-a707-account-create-update-c2xg2"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.520707 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-create-7hkmw"] Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.520991 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521005 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521015 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521021 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521032 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521039 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521047 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521053 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521063 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" 
containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521068 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521076 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521083 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521091 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521097 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521105 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="cinder-backup" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521111 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="cinder-backup" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521124 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="cinder-scheduler" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521131 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="cinder-scheduler" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521139 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 
06:54:31.521147 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521156 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19a6b93f-5562-4cac-8cc0-5aefdf18537d" containerName="mariadb-account-delete" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521163 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a6b93f-5562-4cac-8cc0-5aefdf18537d" containerName="mariadb-account-delete" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521173 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api-log" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521178 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api-log" Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.521252 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521259 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521365 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521373 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521382 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521388 4913 
memory_manager.go:354] "RemoveStaleState removing state" podUID="19a6b93f-5562-4cac-8cc0-5aefdf18537d" containerName="mariadb-account-delete" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521396 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521404 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521411 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521419 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api-log" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521425 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="7667e048-a702-4a50-8e72-35d001e6a310" containerName="cinder-scheduler" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521435 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="cinder-backup" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521443 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521452 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521460 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" containerName="cinder-api" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.521909 4913 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.528168 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-7hkmw"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.620783 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvz5\" (UniqueName: \"kubernetes.io/projected/ed1e527b-217a-46b6-a907-0a6b589f7c4c-kube-api-access-hqvz5\") pod \"cinder-db-create-7hkmw\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.620890 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed1e527b-217a-46b6-a907-0a6b589f7c4c-operator-scripts\") pod \"cinder-db-create-7hkmw\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.622665 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6"] Jan 21 06:54:31 crc kubenswrapper[4913]: E0121 06:54:31.623005 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.623068 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.623263 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="probe" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.623333 4913 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.623861 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.625509 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-db-secret" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.636690 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6"] Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.722080 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqvz5\" (UniqueName: \"kubernetes.io/projected/ed1e527b-217a-46b6-a907-0a6b589f7c4c-kube-api-access-hqvz5\") pod \"cinder-db-create-7hkmw\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.722209 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-operator-scripts\") pod \"cinder-b51d-account-create-update-d95q6\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.722309 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed1e527b-217a-46b6-a907-0a6b589f7c4c-operator-scripts\") pod \"cinder-db-create-7hkmw\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.722404 4913 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpt42\" (UniqueName: \"kubernetes.io/projected/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-kube-api-access-jpt42\") pod \"cinder-b51d-account-create-update-d95q6\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.723772 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed1e527b-217a-46b6-a907-0a6b589f7c4c-operator-scripts\") pod \"cinder-db-create-7hkmw\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.743889 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqvz5\" (UniqueName: \"kubernetes.io/projected/ed1e527b-217a-46b6-a907-0a6b589f7c4c-kube-api-access-hqvz5\") pod \"cinder-db-create-7hkmw\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.823844 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpt42\" (UniqueName: \"kubernetes.io/projected/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-kube-api-access-jpt42\") pod \"cinder-b51d-account-create-update-d95q6\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.823948 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-operator-scripts\") pod \"cinder-b51d-account-create-update-d95q6\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 
06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.824686 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-operator-scripts\") pod \"cinder-b51d-account-create-update-d95q6\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.834056 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.848322 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpt42\" (UniqueName: \"kubernetes.io/projected/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-kube-api-access-jpt42\") pod \"cinder-b51d-account-create-update-d95q6\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:31 crc kubenswrapper[4913]: I0121 06:54:31.936887 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.256408 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-7hkmw"] Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.373262 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" event={"ID":"ed1e527b-217a-46b6-a907-0a6b589f7c4c","Type":"ContainerStarted","Data":"15597597f1dee6d759b1be14f9ed0008d47b913d4ac8cedc150dd6f64a2d5caa"} Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.394681 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6"] Jan 21 06:54:32 crc kubenswrapper[4913]: W0121 06:54:32.408578 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ef9f85f_61e3_44b8_8974_9ec0a1e4e359.slice/crio-d795eb5e2fb7536c4ec6ed20d075a674f073863682bcf95fc17c30804b05d8b7 WatchSource:0}: Error finding container d795eb5e2fb7536c4ec6ed20d075a674f073863682bcf95fc17c30804b05d8b7: Status 404 returned error can't find the container with id d795eb5e2fb7536c4ec6ed20d075a674f073863682bcf95fc17c30804b05d8b7 Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.536745 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19a6b93f-5562-4cac-8cc0-5aefdf18537d" path="/var/lib/kubelet/pods/19a6b93f-5562-4cac-8cc0-5aefdf18537d/volumes" Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.537480 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e5eed1-ff67-483b-808d-466413987e09" path="/var/lib/kubelet/pods/84e5eed1-ff67-483b-808d-466413987e09/volumes" Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.538002 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b256a85e-47c9-4195-9732-d58250fd3f42" 
path="/var/lib/kubelet/pods/b256a85e-47c9-4195-9732-d58250fd3f42/volumes" Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.539137 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12" path="/var/lib/kubelet/pods/c79c3f10-f0ef-4d7e-8be1-eeb8f13c2a12/volumes" Jan 21 06:54:32 crc kubenswrapper[4913]: I0121 06:54:32.539997 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de97e815-d3a3-4a3d-81e2-6054f65b82f0" path="/var/lib/kubelet/pods/de97e815-d3a3-4a3d-81e2-6054f65b82f0/volumes" Jan 21 06:54:33 crc kubenswrapper[4913]: I0121 06:54:33.399965 4913 generic.go:334] "Generic (PLEG): container finished" podID="0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" containerID="f6cac025b126b4c0c411e321cbd1813b091d8ec15b54ab9a538e93d26406b363" exitCode=0 Jan 21 06:54:33 crc kubenswrapper[4913]: I0121 06:54:33.401949 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" event={"ID":"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359","Type":"ContainerDied","Data":"f6cac025b126b4c0c411e321cbd1813b091d8ec15b54ab9a538e93d26406b363"} Jan 21 06:54:33 crc kubenswrapper[4913]: I0121 06:54:33.402022 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" event={"ID":"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359","Type":"ContainerStarted","Data":"d795eb5e2fb7536c4ec6ed20d075a674f073863682bcf95fc17c30804b05d8b7"} Jan 21 06:54:33 crc kubenswrapper[4913]: I0121 06:54:33.405011 4913 generic.go:334] "Generic (PLEG): container finished" podID="ed1e527b-217a-46b6-a907-0a6b589f7c4c" containerID="9eca8ae0460adba99832950743728508ef374a3fcd006d3227af45af81c4c272" exitCode=0 Jan 21 06:54:33 crc kubenswrapper[4913]: I0121 06:54:33.405047 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" 
event={"ID":"ed1e527b-217a-46b6-a907-0a6b589f7c4c","Type":"ContainerDied","Data":"9eca8ae0460adba99832950743728508ef374a3fcd006d3227af45af81c4c272"} Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.832993 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.837158 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.968022 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpt42\" (UniqueName: \"kubernetes.io/projected/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-kube-api-access-jpt42\") pod \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\" (UID: \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.968151 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqvz5\" (UniqueName: \"kubernetes.io/projected/ed1e527b-217a-46b6-a907-0a6b589f7c4c-kube-api-access-hqvz5\") pod \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.968172 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed1e527b-217a-46b6-a907-0a6b589f7c4c-operator-scripts\") pod \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\" (UID: \"ed1e527b-217a-46b6-a907-0a6b589f7c4c\") " Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.968204 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-operator-scripts\") pod \"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\" (UID: 
\"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359\") " Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.968916 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed1e527b-217a-46b6-a907-0a6b589f7c4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed1e527b-217a-46b6-a907-0a6b589f7c4c" (UID: "ed1e527b-217a-46b6-a907-0a6b589f7c4c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.968942 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" (UID: "0ef9f85f-61e3-44b8-8974-9ec0a1e4e359"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.973213 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-kube-api-access-jpt42" (OuterVolumeSpecName: "kube-api-access-jpt42") pod "0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" (UID: "0ef9f85f-61e3-44b8-8974-9ec0a1e4e359"). InnerVolumeSpecName "kube-api-access-jpt42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:34 crc kubenswrapper[4913]: I0121 06:54:34.973269 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed1e527b-217a-46b6-a907-0a6b589f7c4c-kube-api-access-hqvz5" (OuterVolumeSpecName: "kube-api-access-hqvz5") pod "ed1e527b-217a-46b6-a907-0a6b589f7c4c" (UID: "ed1e527b-217a-46b6-a907-0a6b589f7c4c"). InnerVolumeSpecName "kube-api-access-hqvz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.069564 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.069884 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpt42\" (UniqueName: \"kubernetes.io/projected/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359-kube-api-access-jpt42\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.069977 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqvz5\" (UniqueName: \"kubernetes.io/projected/ed1e527b-217a-46b6-a907-0a6b589f7c4c-kube-api-access-hqvz5\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.070069 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed1e527b-217a-46b6-a907-0a6b589f7c4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.422113 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" event={"ID":"ed1e527b-217a-46b6-a907-0a6b589f7c4c","Type":"ContainerDied","Data":"15597597f1dee6d759b1be14f9ed0008d47b913d4ac8cedc150dd6f64a2d5caa"} Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.422170 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15597597f1dee6d759b1be14f9ed0008d47b913d4ac8cedc150dd6f64a2d5caa" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.422242 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-create-7hkmw" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.426254 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" event={"ID":"0ef9f85f-61e3-44b8-8974-9ec0a1e4e359","Type":"ContainerDied","Data":"d795eb5e2fb7536c4ec6ed20d075a674f073863682bcf95fc17c30804b05d8b7"} Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.426292 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d795eb5e2fb7536c4ec6ed20d075a674f073863682bcf95fc17c30804b05d8b7" Jan 21 06:54:35 crc kubenswrapper[4913]: I0121 06:54:35.426572 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.861198 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-dslh8"] Jan 21 06:54:36 crc kubenswrapper[4913]: E0121 06:54:36.862155 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1e527b-217a-46b6-a907-0a6b589f7c4c" containerName="mariadb-database-create" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.862195 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1e527b-217a-46b6-a907-0a6b589f7c4c" containerName="mariadb-database-create" Jan 21 06:54:36 crc kubenswrapper[4913]: E0121 06:54:36.862227 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" containerName="mariadb-account-create-update" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.862244 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" containerName="mariadb-account-create-update" Jan 21 06:54:36 crc kubenswrapper[4913]: E0121 06:54:36.862264 4913 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.862281 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8afc51-5054-46d8-a16d-e07541ff4af7" containerName="cinder-volume" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.862526 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" containerName="mariadb-account-create-update" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.862560 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed1e527b-217a-46b6-a907-0a6b589f7c4c" containerName="mariadb-database-create" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.863722 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.866236 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-bdjrj" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.867055 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.867461 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"combined-ca-bundle" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.868235 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.882959 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-dslh8"] Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.999362 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nngsg\" (UniqueName: 
\"kubernetes.io/projected/2a5f0576-b10d-48d0-9017-4e24b85a1968-kube-api-access-nngsg\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.999487 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-config-data\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.999516 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-scripts\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:36 crc kubenswrapper[4913]: I0121 06:54:36.999698 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-db-sync-config-data\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:36.999933 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-combined-ca-bundle\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:36.999997 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/2a5f0576-b10d-48d0-9017-4e24b85a1968-etc-machine-id\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.100924 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-scripts\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.100978 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-db-sync-config-data\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.101045 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-combined-ca-bundle\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.101074 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a5f0576-b10d-48d0-9017-4e24b85a1968-etc-machine-id\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.101130 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nngsg\" (UniqueName: \"kubernetes.io/projected/2a5f0576-b10d-48d0-9017-4e24b85a1968-kube-api-access-nngsg\") 
pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.101260 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-config-data\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.101296 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a5f0576-b10d-48d0-9017-4e24b85a1968-etc-machine-id\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.108229 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-db-sync-config-data\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.110686 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-scripts\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.112636 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-combined-ca-bundle\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 
06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.112785 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-config-data\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.143486 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nngsg\" (UniqueName: \"kubernetes.io/projected/2a5f0576-b10d-48d0-9017-4e24b85a1968-kube-api-access-nngsg\") pod \"cinder-db-sync-dslh8\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.191259 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:37 crc kubenswrapper[4913]: I0121 06:54:37.696308 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-dslh8"] Jan 21 06:54:37 crc kubenswrapper[4913]: W0121 06:54:37.718745 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a5f0576_b10d_48d0_9017_4e24b85a1968.slice/crio-72171fed0172f0ad5ff3978ba57d29cced10c48e44a8b9f2225ffbee9421c79d WatchSource:0}: Error finding container 72171fed0172f0ad5ff3978ba57d29cced10c48e44a8b9f2225ffbee9421c79d: Status 404 returned error can't find the container with id 72171fed0172f0ad5ff3978ba57d29cced10c48e44a8b9f2225ffbee9421c79d Jan 21 06:54:38 crc kubenswrapper[4913]: I0121 06:54:38.456555 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" event={"ID":"2a5f0576-b10d-48d0-9017-4e24b85a1968","Type":"ContainerStarted","Data":"2fbf80cb1c97a1b30187af9913da5a773063cae1392a250c64b697455ea04db1"} Jan 21 06:54:38 crc kubenswrapper[4913]: I0121 
06:54:38.457013 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" event={"ID":"2a5f0576-b10d-48d0-9017-4e24b85a1968","Type":"ContainerStarted","Data":"72171fed0172f0ad5ff3978ba57d29cced10c48e44a8b9f2225ffbee9421c79d"} Jan 21 06:54:40 crc kubenswrapper[4913]: I0121 06:54:40.476226 4913 generic.go:334] "Generic (PLEG): container finished" podID="2a5f0576-b10d-48d0-9017-4e24b85a1968" containerID="2fbf80cb1c97a1b30187af9913da5a773063cae1392a250c64b697455ea04db1" exitCode=0 Jan 21 06:54:40 crc kubenswrapper[4913]: I0121 06:54:40.476317 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" event={"ID":"2a5f0576-b10d-48d0-9017-4e24b85a1968","Type":"ContainerDied","Data":"2fbf80cb1c97a1b30187af9913da5a773063cae1392a250c64b697455ea04db1"} Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.790968 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879238 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a5f0576-b10d-48d0-9017-4e24b85a1968-etc-machine-id\") pod \"2a5f0576-b10d-48d0-9017-4e24b85a1968\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879355 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-db-sync-config-data\") pod \"2a5f0576-b10d-48d0-9017-4e24b85a1968\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879393 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a5f0576-b10d-48d0-9017-4e24b85a1968-etc-machine-id" (OuterVolumeSpecName: 
"etc-machine-id") pod "2a5f0576-b10d-48d0-9017-4e24b85a1968" (UID: "2a5f0576-b10d-48d0-9017-4e24b85a1968"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879470 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-scripts\") pod \"2a5f0576-b10d-48d0-9017-4e24b85a1968\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879570 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-combined-ca-bundle\") pod \"2a5f0576-b10d-48d0-9017-4e24b85a1968\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879777 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nngsg\" (UniqueName: \"kubernetes.io/projected/2a5f0576-b10d-48d0-9017-4e24b85a1968-kube-api-access-nngsg\") pod \"2a5f0576-b10d-48d0-9017-4e24b85a1968\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.879851 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-config-data\") pod \"2a5f0576-b10d-48d0-9017-4e24b85a1968\" (UID: \"2a5f0576-b10d-48d0-9017-4e24b85a1968\") " Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.883212 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a5f0576-b10d-48d0-9017-4e24b85a1968-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.887792 4913 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-scripts" (OuterVolumeSpecName: "scripts") pod "2a5f0576-b10d-48d0-9017-4e24b85a1968" (UID: "2a5f0576-b10d-48d0-9017-4e24b85a1968"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.887922 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2a5f0576-b10d-48d0-9017-4e24b85a1968" (UID: "2a5f0576-b10d-48d0-9017-4e24b85a1968"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.887820 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5f0576-b10d-48d0-9017-4e24b85a1968-kube-api-access-nngsg" (OuterVolumeSpecName: "kube-api-access-nngsg") pod "2a5f0576-b10d-48d0-9017-4e24b85a1968" (UID: "2a5f0576-b10d-48d0-9017-4e24b85a1968"). InnerVolumeSpecName "kube-api-access-nngsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.917820 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a5f0576-b10d-48d0-9017-4e24b85a1968" (UID: "2a5f0576-b10d-48d0-9017-4e24b85a1968"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.926443 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-config-data" (OuterVolumeSpecName: "config-data") pod "2a5f0576-b10d-48d0-9017-4e24b85a1968" (UID: "2a5f0576-b10d-48d0-9017-4e24b85a1968"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.984431 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.984470 4913 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.984485 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nngsg\" (UniqueName: \"kubernetes.io/projected/2a5f0576-b10d-48d0-9017-4e24b85a1968-kube-api-access-nngsg\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.984499 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:41 crc kubenswrapper[4913]: I0121 06:54:41.984510 4913 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2a5f0576-b10d-48d0-9017-4e24b85a1968-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.502066 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" event={"ID":"2a5f0576-b10d-48d0-9017-4e24b85a1968","Type":"ContainerDied","Data":"72171fed0172f0ad5ff3978ba57d29cced10c48e44a8b9f2225ffbee9421c79d"} Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.502124 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-db-sync-dslh8" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.502129 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72171fed0172f0ad5ff3978ba57d29cced10c48e44a8b9f2225ffbee9421c79d" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.793245 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:54:42 crc kubenswrapper[4913]: E0121 06:54:42.793468 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5f0576-b10d-48d0-9017-4e24b85a1968" containerName="cinder-db-sync" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.793480 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5f0576-b10d-48d0-9017-4e24b85a1968" containerName="cinder-db-sync" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.793624 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5f0576-b10d-48d0-9017-4e24b85a1968" containerName="cinder-db-sync" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.794216 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.796977 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-config-data" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.797205 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scheduler-config-data" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.797329 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-cinder-dockercfg-bdjrj" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.797417 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-scripts" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.797515 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"combined-ca-bundle" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.810679 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.811694 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.814061 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-backup-config-data" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.820831 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.873805 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.875061 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.879888 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-volume-volume1-config-data" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.894970 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.898871 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.899851 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-scripts\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.899980 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.900091 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc 
kubenswrapper[4913]: I0121 06:54:42.900254 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.900387 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.900499 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv65p\" (UniqueName: \"kubernetes.io/projected/bea9195d-d908-4239-a57b-6783d75b959c-kube-api-access-lv65p\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.900657 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.900798 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 
06:54:42.900913 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lndcw\" (UniqueName: \"kubernetes.io/projected/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-kube-api-access-lndcw\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901018 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-lib-modules\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901141 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901242 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901359 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-scripts\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901467 4913 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901571 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901787 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-run\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.901942 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-sys\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.902053 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-dev\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.902182 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" 
(UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.902285 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.910687 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.962507 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.967040 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.969753 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cinder-api-config-data" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.969851 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cert-cinder-public-svc" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.969988 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"cert-cinder-internal-svc" Jan 21 06:54:42 crc kubenswrapper[4913]: I0121 06:54:42.977651 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003162 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003205 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003228 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003251 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-scripts\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003271 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003286 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003311 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003332 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003348 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003362 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003380 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003401 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv65p\" (UniqueName: \"kubernetes.io/projected/bea9195d-d908-4239-a57b-6783d75b959c-kube-api-access-lv65p\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003415 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003431 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p2km\" (UniqueName: 
\"kubernetes.io/projected/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-kube-api-access-2p2km\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003449 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-dev\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003475 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003493 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-run\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003511 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003526 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003544 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003563 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003579 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lndcw\" (UniqueName: \"kubernetes.io/projected/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-kube-api-access-lndcw\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003610 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-lib-modules\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003630 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data\") pod 
\"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003645 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003661 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003680 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003694 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003726 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-scripts\") pod \"cinder-backup-0\" (UID: 
\"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003741 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-sys\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003758 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003771 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003789 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-run\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003807 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc 
kubenswrapper[4913]: I0121 06:54:43.003824 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-sys\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003842 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-dev\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.003911 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-dev\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.004559 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.004697 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005173 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005329 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-lib-modules\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005582 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005627 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-sys\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005645 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-run\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005666 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc 
kubenswrapper[4913]: I0121 06:54:43.005680 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.005963 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.008194 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.009148 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.009192 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.020705 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.022163 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lndcw\" (UniqueName: \"kubernetes.io/projected/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-kube-api-access-lndcw\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.025524 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-scripts\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.026050 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.026575 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-scripts\") pod \"cinder-scheduler-0\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.029087 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv65p\" (UniqueName: \"kubernetes.io/projected/bea9195d-d908-4239-a57b-6783d75b959c-kube-api-access-lv65p\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " 
pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.029612 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data\") pod \"cinder-backup-0\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104783 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-scripts\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104824 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5bkf\" (UniqueName: \"kubernetes.io/projected/2458ee8d-8802-4047-a9fe-d077f2d2450d-kube-api-access-m5bkf\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104850 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104882 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 
06:54:43.104899 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2458ee8d-8802-4047-a9fe-d077f2d2450d-logs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104925 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104962 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.104985 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105009 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105045 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105065 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p2km\" (UniqueName: \"kubernetes.io/projected/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-kube-api-access-2p2km\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105086 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-dev\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105111 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105135 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data-custom\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105158 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/2458ee8d-8802-4047-a9fe-d077f2d2450d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105172 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-run\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105186 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105203 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105216 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105234 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: 
\"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105254 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105270 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105285 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105307 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-sys\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105330 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " 
pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105349 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105525 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105816 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-dev\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105860 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-run\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105916 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105951 4913 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.105993 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.106099 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-sys\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.106508 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.107649 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.107957 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.109963 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.120491 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p2km\" (UniqueName: \"kubernetes.io/projected/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-kube-api-access-2p2km\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.123212 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.165493 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.181270 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.200943 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206138 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206181 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-scripts\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206201 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5bkf\" (UniqueName: \"kubernetes.io/projected/2458ee8d-8802-4047-a9fe-d077f2d2450d-kube-api-access-m5bkf\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206226 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206251 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2458ee8d-8802-4047-a9fe-d077f2d2450d-logs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206275 4913 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206294 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data-custom\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206316 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2458ee8d-8802-4047-a9fe-d077f2d2450d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206330 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.206998 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2458ee8d-8802-4047-a9fe-d077f2d2450d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.207556 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2458ee8d-8802-4047-a9fe-d077f2d2450d-logs\") pod \"cinder-api-0\" (UID: 
\"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.210135 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-scripts\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.210232 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.210382 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.211944 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.212235 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.214060 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data-custom\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.227396 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5bkf\" (UniqueName: \"kubernetes.io/projected/2458ee8d-8802-4047-a9fe-d077f2d2450d-kube-api-access-m5bkf\") pod \"cinder-api-0\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.289431 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.435119 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:54:43 crc kubenswrapper[4913]: W0121 06:54:43.493372 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbea9195d_d908_4239_a57b_6783d75b959c.slice/crio-35509a2f25aab9a046cebd79f090838505c9ff025794493b470b2c34e2190d6f WatchSource:0}: Error finding container 35509a2f25aab9a046cebd79f090838505c9ff025794493b470b2c34e2190d6f: Status 404 returned error can't find the container with id 35509a2f25aab9a046cebd79f090838505c9ff025794493b470b2c34e2190d6f Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.498342 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.531836 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"a4203da3-d347-42bb-8e9b-6bdbf250c4eb","Type":"ContainerStarted","Data":"fb558be5bdd8427975385c1ca42849d170194d638784eeb487c234d00654676d"} Jan 21 06:54:43 crc kubenswrapper[4913]: 
I0121 06:54:43.537009 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"bea9195d-d908-4239-a57b-6783d75b959c","Type":"ContainerStarted","Data":"35509a2f25aab9a046cebd79f090838505c9ff025794493b470b2c34e2190d6f"} Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.570096 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:43 crc kubenswrapper[4913]: W0121 06:54:43.571734 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e9b3c88_2566_48cb_8f74_d1976b0e6bd1.slice/crio-ca23a3b5807f0a82e3e76688102e82372907414afed6a61d5f49fd37d64d6ad8 WatchSource:0}: Error finding container ca23a3b5807f0a82e3e76688102e82372907414afed6a61d5f49fd37d64d6ad8: Status 404 returned error can't find the container with id ca23a3b5807f0a82e3e76688102e82372907414afed6a61d5f49fd37d64d6ad8 Jan 21 06:54:43 crc kubenswrapper[4913]: I0121 06:54:43.800415 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:43 crc kubenswrapper[4913]: W0121 06:54:43.820272 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2458ee8d_8802_4047_a9fe_d077f2d2450d.slice/crio-449324410dac3de75a4a4476507668ac6f203fc50d5a4fe75ff92301e132c9eb WatchSource:0}: Error finding container 449324410dac3de75a4a4476507668ac6f203fc50d5a4fe75ff92301e132c9eb: Status 404 returned error can't find the container with id 449324410dac3de75a4a4476507668ac6f203fc50d5a4fe75ff92301e132c9eb Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.556109 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"a4203da3-d347-42bb-8e9b-6bdbf250c4eb","Type":"ContainerStarted","Data":"858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67"} Jan 21 
06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.556730 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"a4203da3-d347-42bb-8e9b-6bdbf250c4eb","Type":"ContainerStarted","Data":"ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.561424 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"2458ee8d-8802-4047-a9fe-d077f2d2450d","Type":"ContainerStarted","Data":"93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.561471 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"2458ee8d-8802-4047-a9fe-d077f2d2450d","Type":"ContainerStarted","Data":"449324410dac3de75a4a4476507668ac6f203fc50d5a4fe75ff92301e132c9eb"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.563860 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"bea9195d-d908-4239-a57b-6783d75b959c","Type":"ContainerStarted","Data":"fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.563915 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"bea9195d-d908-4239-a57b-6783d75b959c","Type":"ContainerStarted","Data":"c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.566031 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerStarted","Data":"c2e2e7182326cbdc5416fdcee7c1e31efc131ab97f15b28b9e0e8e3cc245fbd2"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.566066 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerStarted","Data":"d3346e732c277c48153e7fb1c2981c00e554a376f27a8a996341aeec5550df81"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.566080 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerStarted","Data":"ca23a3b5807f0a82e3e76688102e82372907414afed6a61d5f49fd37d64d6ad8"} Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.583411 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-scheduler-0" podStartSLOduration=2.583388707 podStartE2EDuration="2.583388707s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:44.57722078 +0000 UTC m=+1174.373580493" watchObservedRunningTime="2026-01-21 06:54:44.583388707 +0000 UTC m=+1174.379748400" Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.619872 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podStartSLOduration=2.6198504209999998 podStartE2EDuration="2.619850421s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:44.611906986 +0000 UTC m=+1174.408266689" watchObservedRunningTime="2026-01-21 06:54:44.619850421 +0000 UTC m=+1174.416210094" Jan 21 06:54:44 crc kubenswrapper[4913]: I0121 06:54:44.642667 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-backup-0" podStartSLOduration=2.642646727 podStartE2EDuration="2.642646727s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:44.635228767 +0000 UTC m=+1174.431588460" watchObservedRunningTime="2026-01-21 06:54:44.642646727 +0000 UTC m=+1174.439006410" Jan 21 06:54:45 crc kubenswrapper[4913]: I0121 06:54:45.575401 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"2458ee8d-8802-4047-a9fe-d077f2d2450d","Type":"ContainerStarted","Data":"f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616"} Jan 21 06:54:45 crc kubenswrapper[4913]: I0121 06:54:45.575934 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:45 crc kubenswrapper[4913]: I0121 06:54:45.577419 4913 generic.go:334] "Generic (PLEG): container finished" podID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerID="c2e2e7182326cbdc5416fdcee7c1e31efc131ab97f15b28b9e0e8e3cc245fbd2" exitCode=1 Jan 21 06:54:45 crc kubenswrapper[4913]: I0121 06:54:45.577472 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerDied","Data":"c2e2e7182326cbdc5416fdcee7c1e31efc131ab97f15b28b9e0e8e3cc245fbd2"} Jan 21 06:54:45 crc kubenswrapper[4913]: I0121 06:54:45.577713 4913 scope.go:117] "RemoveContainer" containerID="c2e2e7182326cbdc5416fdcee7c1e31efc131ab97f15b28b9e0e8e3cc245fbd2" Jan 21 06:54:45 crc kubenswrapper[4913]: I0121 06:54:45.608512 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cinder-kuttl-tests/cinder-api-0" podStartSLOduration=3.608484712 podStartE2EDuration="3.608484712s" podCreationTimestamp="2026-01-21 06:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 06:54:45.601955315 +0000 UTC m=+1175.398315008" watchObservedRunningTime="2026-01-21 06:54:45.608484712 
+0000 UTC m=+1175.404844425" Jan 21 06:54:46 crc kubenswrapper[4913]: I0121 06:54:46.588311 4913 generic.go:334] "Generic (PLEG): container finished" podID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerID="d3346e732c277c48153e7fb1c2981c00e554a376f27a8a996341aeec5550df81" exitCode=1 Jan 21 06:54:46 crc kubenswrapper[4913]: I0121 06:54:46.588371 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerDied","Data":"d3346e732c277c48153e7fb1c2981c00e554a376f27a8a996341aeec5550df81"} Jan 21 06:54:46 crc kubenswrapper[4913]: I0121 06:54:46.590452 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerStarted","Data":"5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063"} Jan 21 06:54:46 crc kubenswrapper[4913]: I0121 06:54:46.589003 4913 scope.go:117] "RemoveContainer" containerID="d3346e732c277c48153e7fb1c2981c00e554a376f27a8a996341aeec5550df81" Jan 21 06:54:47 crc kubenswrapper[4913]: I0121 06:54:47.600355 4913 generic.go:334] "Generic (PLEG): container finished" podID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" exitCode=1 Jan 21 06:54:47 crc kubenswrapper[4913]: I0121 06:54:47.600435 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerDied","Data":"5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063"} Jan 21 06:54:47 crc kubenswrapper[4913]: I0121 06:54:47.600689 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerStarted","Data":"86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c"} Jan 21 
06:54:47 crc kubenswrapper[4913]: I0121 06:54:47.600714 4913 scope.go:117] "RemoveContainer" containerID="c2e2e7182326cbdc5416fdcee7c1e31efc131ab97f15b28b9e0e8e3cc245fbd2" Jan 21 06:54:47 crc kubenswrapper[4913]: I0121 06:54:47.601151 4913 scope.go:117] "RemoveContainer" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" Jan 21 06:54:47 crc kubenswrapper[4913]: E0121 06:54:47.601467 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\"" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.166055 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.181788 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.201936 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.373900 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.417521 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.609831 4913 generic.go:334] "Generic (PLEG): container finished" podID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerID="86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c" exitCode=1 Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 
06:54:48.609909 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerDied","Data":"86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c"} Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.610309 4913 scope.go:117] "RemoveContainer" containerID="d3346e732c277c48153e7fb1c2981c00e554a376f27a8a996341aeec5550df81" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.610709 4913 scope.go:117] "RemoveContainer" containerID="86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c" Jan 21 06:54:48 crc kubenswrapper[4913]: I0121 06:54:48.610767 4913 scope.go:117] "RemoveContainer" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" Jan 21 06:54:48 crc kubenswrapper[4913]: E0121 06:54:48.611191 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" Jan 21 06:54:49 crc kubenswrapper[4913]: I0121 06:54:49.201858 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:49 crc kubenswrapper[4913]: I0121 06:54:49.624945 4913 scope.go:117] "RemoveContainer" containerID="86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c" Jan 21 06:54:49 crc kubenswrapper[4913]: I0121 06:54:49.625003 4913 scope.go:117] "RemoveContainer" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" Jan 
21 06:54:49 crc kubenswrapper[4913]: E0121 06:54:49.626487 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" Jan 21 06:54:50 crc kubenswrapper[4913]: I0121 06:54:50.637247 4913 scope.go:117] "RemoveContainer" containerID="86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c" Jan 21 06:54:50 crc kubenswrapper[4913]: I0121 06:54:50.637302 4913 scope.go:117] "RemoveContainer" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" Jan 21 06:54:50 crc kubenswrapper[4913]: E0121 06:54:50.637741 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" Jan 21 06:54:53 crc kubenswrapper[4913]: I0121 06:54:53.201852 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:53 crc kubenswrapper[4913]: I0121 06:54:53.203500 4913 scope.go:117] "RemoveContainer" 
containerID="86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c" Jan 21 06:54:53 crc kubenswrapper[4913]: I0121 06:54:53.203526 4913 scope.go:117] "RemoveContainer" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" Jan 21 06:54:53 crc kubenswrapper[4913]: E0121 06:54:53.204061 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-volume\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-volume pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\", failed to \"StartContainer\" for \"probe\" with CrashLoopBackOff: \"back-off 10s restarting failed container=probe pod=cinder-volume-volume1-0_cinder-kuttl-tests(6e9b3c88-2566-48cb-8f74-d1976b0e6bd1)\"]" pod="cinder-kuttl-tests/cinder-volume-volume1-0" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" Jan 21 06:54:55 crc kubenswrapper[4913]: I0121 06:54:55.338675 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.323336 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-dslh8"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.335172 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-sync-dslh8"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.347116 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.347620 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="probe" containerID="cri-o://fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7" gracePeriod=30 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.347973 4913 
kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-backup-0" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="cinder-backup" containerID="cri-o://c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b" gracePeriod=30 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.356945 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.357236 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="cinder-scheduler" containerID="cri-o://ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e" gracePeriod=30 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.357387 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-scheduler-0" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="probe" containerID="cri-o://858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67" gracePeriod=30 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.417207 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.425903 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/cinderb51d-account-delete-jvhc7"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.426645 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.433694 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.434179 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api-log" containerID="cri-o://93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2" gracePeriod=30 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.434292 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/cinder-api-0" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api" containerID="cri-o://f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616" gracePeriod=30 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.439854 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="cinder-kuttl-tests/cinder-api-0" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.106:8776/healthcheck\": EOF" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.440655 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinderb51d-account-delete-jvhc7"] Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.535509 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5f0576-b10d-48d0-9017-4e24b85a1968" path="/var/lib/kubelet/pods/2a5f0576-b10d-48d0-9017-4e24b85a1968/volumes" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.608710 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tv2n\" (UniqueName: \"kubernetes.io/projected/583f045f-efa0-4df8-8b2f-b9699740fc92-kube-api-access-2tv2n\") pod 
\"cinderb51d-account-delete-jvhc7\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") " pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.608763 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/583f045f-efa0-4df8-8b2f-b9699740fc92-operator-scripts\") pod \"cinderb51d-account-delete-jvhc7\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") " pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.688785 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-volume-volume1-0" event={"ID":"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1","Type":"ContainerDied","Data":"ca23a3b5807f0a82e3e76688102e82372907414afed6a61d5f49fd37d64d6ad8"} Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.688860 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca23a3b5807f0a82e3e76688102e82372907414afed6a61d5f49fd37d64d6ad8" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.691920 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"2458ee8d-8802-4047-a9fe-d077f2d2450d","Type":"ContainerDied","Data":"93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2"} Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.691864 4913 generic.go:334] "Generic (PLEG): container finished" podID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerID="93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2" exitCode=143 Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.710432 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tv2n\" (UniqueName: \"kubernetes.io/projected/583f045f-efa0-4df8-8b2f-b9699740fc92-kube-api-access-2tv2n\") pod \"cinderb51d-account-delete-jvhc7\" (UID: 
\"583f045f-efa0-4df8-8b2f-b9699740fc92\") " pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.710484 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/583f045f-efa0-4df8-8b2f-b9699740fc92-operator-scripts\") pod \"cinderb51d-account-delete-jvhc7\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") " pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.711346 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/583f045f-efa0-4df8-8b2f-b9699740fc92-operator-scripts\") pod \"cinderb51d-account-delete-jvhc7\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") " pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.734582 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.734330 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tv2n\" (UniqueName: \"kubernetes.io/projected/583f045f-efa0-4df8-8b2f-b9699740fc92-kube-api-access-2tv2n\") pod \"cinderb51d-account-delete-jvhc7\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") " pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.739672 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912770 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-combined-ca-bundle\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912828 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-cinder\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912852 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-run\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912884 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-machine-id\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912907 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912931 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" 
(UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-nvme\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912971 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-dev\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.912988 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-scripts\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913006 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data-custom\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913028 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-brick\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913048 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-lib-cinder\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913064 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p2km\" (UniqueName: \"kubernetes.io/projected/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-kube-api-access-2p2km\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913083 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-sys\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913107 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-iscsi\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913121 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-lib-modules\") pod \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\" (UID: \"6e9b3c88-2566-48cb-8f74-d1976b0e6bd1\") " Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913420 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913758 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913829 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-run" (OuterVolumeSpecName: "run") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913852 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913967 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.913998 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.914015 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-dev" (OuterVolumeSpecName: "dev") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.914711 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-sys" (OuterVolumeSpecName: "sys") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.914784 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.914812 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.918687 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-scripts" (OuterVolumeSpecName: "scripts") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.918766 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-kube-api-access-2p2km" (OuterVolumeSpecName: "kube-api-access-2p2km") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "kube-api-access-2p2km". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.924900 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.953374 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:56 crc kubenswrapper[4913]: I0121 06:54:56.988683 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data" (OuterVolumeSpecName: "config-data") pod "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" (UID: "6e9b3c88-2566-48cb-8f74-d1976b0e6bd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014797 4913 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014836 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014849 4913 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-run\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014863 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 
06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014876 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014887 4913 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014897 4913 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-dev\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014908 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014919 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014930 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014943 4913 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014953 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p2km\" (UniqueName: 
\"kubernetes.io/projected/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-kube-api-access-2p2km\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014968 4913 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-sys\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014979 4913 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.014990 4913 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.208432 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/cinderb51d-account-delete-jvhc7"] Jan 21 06:54:57 crc kubenswrapper[4913]: W0121 06:54:57.213571 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod583f045f_efa0_4df8_8b2f_b9699740fc92.slice/crio-b9a21be51c33eccb9a835b1192dd4d6581ef5aade532ca42259db8cc924d1699 WatchSource:0}: Error finding container b9a21be51c33eccb9a835b1192dd4d6581ef5aade532ca42259db8cc924d1699: Status 404 returned error can't find the container with id b9a21be51c33eccb9a835b1192dd4d6581ef5aade532ca42259db8cc924d1699 Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.700253 4913 generic.go:334] "Generic (PLEG): container finished" podID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerID="858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67" exitCode=0 Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.700307 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"a4203da3-d347-42bb-8e9b-6bdbf250c4eb","Type":"ContainerDied","Data":"858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67"} Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.702213 4913 generic.go:334] "Generic (PLEG): container finished" podID="583f045f-efa0-4df8-8b2f-b9699740fc92" containerID="432a97f97a1e748db15a8c859dcaa7de8838a131f61c83acbb060114eb9ecddf" exitCode=0 Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.702280 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" event={"ID":"583f045f-efa0-4df8-8b2f-b9699740fc92","Type":"ContainerDied","Data":"432a97f97a1e748db15a8c859dcaa7de8838a131f61c83acbb060114eb9ecddf"} Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.702314 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" event={"ID":"583f045f-efa0-4df8-8b2f-b9699740fc92","Type":"ContainerStarted","Data":"b9a21be51c33eccb9a835b1192dd4d6581ef5aade532ca42259db8cc924d1699"} Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.704304 4913 generic.go:334] "Generic (PLEG): container finished" podID="bea9195d-d908-4239-a57b-6783d75b959c" containerID="fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7" exitCode=0 Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.704365 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"bea9195d-d908-4239-a57b-6783d75b959c","Type":"ContainerDied","Data":"fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7"} Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.704392 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-volume-volume1-0" Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.751732 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:57 crc kubenswrapper[4913]: I0121 06:54:57.756777 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-volume-volume1-0"] Jan 21 06:54:58 crc kubenswrapper[4913]: I0121 06:54:58.536002 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" path="/var/lib/kubelet/pods/6e9b3c88-2566-48cb-8f74-d1976b0e6bd1/volumes" Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.059916 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.242254 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tv2n\" (UniqueName: \"kubernetes.io/projected/583f045f-efa0-4df8-8b2f-b9699740fc92-kube-api-access-2tv2n\") pod \"583f045f-efa0-4df8-8b2f-b9699740fc92\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") " Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.242385 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/583f045f-efa0-4df8-8b2f-b9699740fc92-operator-scripts\") pod \"583f045f-efa0-4df8-8b2f-b9699740fc92\" (UID: \"583f045f-efa0-4df8-8b2f-b9699740fc92\") " Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.243136 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/583f045f-efa0-4df8-8b2f-b9699740fc92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "583f045f-efa0-4df8-8b2f-b9699740fc92" (UID: "583f045f-efa0-4df8-8b2f-b9699740fc92"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.249376 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/583f045f-efa0-4df8-8b2f-b9699740fc92-kube-api-access-2tv2n" (OuterVolumeSpecName: "kube-api-access-2tv2n") pod "583f045f-efa0-4df8-8b2f-b9699740fc92" (UID: "583f045f-efa0-4df8-8b2f-b9699740fc92"). InnerVolumeSpecName "kube-api-access-2tv2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.344322 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/583f045f-efa0-4df8-8b2f-b9699740fc92-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.344356 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tv2n\" (UniqueName: \"kubernetes.io/projected/583f045f-efa0-4df8-8b2f-b9699740fc92-kube-api-access-2tv2n\") on node \"crc\" DevicePath \"\"" Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.725685 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" event={"ID":"583f045f-efa0-4df8-8b2f-b9699740fc92","Type":"ContainerDied","Data":"b9a21be51c33eccb9a835b1192dd4d6581ef5aade532ca42259db8cc924d1699"} Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.725724 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a21be51c33eccb9a835b1192dd4d6581ef5aade532ca42259db8cc924d1699" Jan 21 06:54:59 crc kubenswrapper[4913]: I0121 06:54:59.725764 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinderb51d-account-delete-jvhc7" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.645829 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.655474 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.736649 4913 generic.go:334] "Generic (PLEG): container finished" podID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerID="ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e" exitCode=0 Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.736866 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-scheduler-0" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.737103 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"a4203da3-d347-42bb-8e9b-6bdbf250c4eb","Type":"ContainerDied","Data":"ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e"} Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.737260 4913 scope.go:117] "RemoveContainer" containerID="858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.737822 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-scheduler-0" event={"ID":"a4203da3-d347-42bb-8e9b-6bdbf250c4eb","Type":"ContainerDied","Data":"fb558be5bdd8427975385c1ca42849d170194d638784eeb487c234d00654676d"} Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.742194 4913 generic.go:334] "Generic (PLEG): container finished" podID="bea9195d-d908-4239-a57b-6783d75b959c" containerID="c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b" exitCode=0 Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.742252 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" 
event={"ID":"bea9195d-d908-4239-a57b-6783d75b959c","Type":"ContainerDied","Data":"c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b"} Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.742291 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-backup-0" event={"ID":"bea9195d-d908-4239-a57b-6783d75b959c","Type":"ContainerDied","Data":"35509a2f25aab9a046cebd79f090838505c9ff025794493b470b2c34e2190d6f"} Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.742400 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-backup-0" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.762945 4913 scope.go:117] "RemoveContainer" containerID="ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776785 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-run\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776838 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-dev\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776892 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data-custom\") pod \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776901 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-run" (OuterVolumeSpecName: "run") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776934 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-iscsi\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776972 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data-custom\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777001 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-nvme\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777035 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-lib-cinder\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777058 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-etc-machine-id\") pod \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\" (UID: 
\"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777084 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-sys\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777119 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv65p\" (UniqueName: \"kubernetes.io/projected/bea9195d-d908-4239-a57b-6783d75b959c-kube-api-access-lv65p\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777186 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-machine-id\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777209 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lndcw\" (UniqueName: \"kubernetes.io/projected/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-kube-api-access-lndcw\") pod \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777230 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-scripts\") pod \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777264 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data\") pod \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777296 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-lib-modules\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777334 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-combined-ca-bundle\") pod \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\" (UID: \"a4203da3-d347-42bb-8e9b-6bdbf250c4eb\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777364 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-brick\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777394 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-scripts\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777425 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-combined-ca-bundle\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777450 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777475 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-cinder\") pod \"bea9195d-d908-4239-a57b-6783d75b959c\" (UID: \"bea9195d-d908-4239-a57b-6783d75b959c\") " Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777854 4913 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-run\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.776970 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-dev" (OuterVolumeSpecName: "dev") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777002 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778108 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778188 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778294 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778576 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-sys" (OuterVolumeSpecName: "sys") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.777997 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a4203da3-d347-42bb-8e9b-6bdbf250c4eb" (UID: "a4203da3-d347-42bb-8e9b-6bdbf250c4eb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778701 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778893 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.778906 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.783115 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-kube-api-access-lndcw" (OuterVolumeSpecName: "kube-api-access-lndcw") pod "a4203da3-d347-42bb-8e9b-6bdbf250c4eb" (UID: "a4203da3-d347-42bb-8e9b-6bdbf250c4eb"). InnerVolumeSpecName "kube-api-access-lndcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.783317 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.783323 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea9195d-d908-4239-a57b-6783d75b959c-kube-api-access-lv65p" (OuterVolumeSpecName: "kube-api-access-lv65p") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "kube-api-access-lv65p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.783500 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-scripts" (OuterVolumeSpecName: "scripts") pod "a4203da3-d347-42bb-8e9b-6bdbf250c4eb" (UID: "a4203da3-d347-42bb-8e9b-6bdbf250c4eb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.783753 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a4203da3-d347-42bb-8e9b-6bdbf250c4eb" (UID: "a4203da3-d347-42bb-8e9b-6bdbf250c4eb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.784868 4913 scope.go:117] "RemoveContainer" containerID="858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.785008 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-scripts" (OuterVolumeSpecName: "scripts") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: E0121 06:55:00.785337 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67\": container with ID starting with 858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67 not found: ID does not exist" containerID="858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.785364 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67"} err="failed to get container status \"858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67\": rpc error: code = NotFound desc = could not find container \"858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67\": container with ID starting with 858f6995d0d59c826519abc4b8506eee63e5f80aefb291391d661ea9952b4a67 not found: ID does not exist" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.785382 4913 scope.go:117] "RemoveContainer" containerID="ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e" Jan 21 06:55:00 crc kubenswrapper[4913]: E0121 06:55:00.785969 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e\": container with ID starting with ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e not found: ID does not exist" containerID="ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.786004 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e"} err="failed 
to get container status \"ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e\": rpc error: code = NotFound desc = could not find container \"ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e\": container with ID starting with ca75c8fca020b217d89ac150d2bc874c877761992c02888b8d2fc08d57ec804e not found: ID does not exist" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.786025 4913 scope.go:117] "RemoveContainer" containerID="fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.807080 4913 scope.go:117] "RemoveContainer" containerID="c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.825366 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.835517 4913 scope.go:117] "RemoveContainer" containerID="fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7" Jan 21 06:55:00 crc kubenswrapper[4913]: E0121 06:55:00.837751 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7\": container with ID starting with fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7 not found: ID does not exist" containerID="fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.837820 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7"} err="failed to get container status \"fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7\": rpc error: code = NotFound desc = could not find container \"fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7\": container with ID starting with fcd7ba5ffdaa88c1cf915859ba91fe0c1306be78d8a292bb182bbe7139785ab7 not found: ID does not exist" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.837859 4913 scope.go:117] "RemoveContainer" containerID="c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b" Jan 21 06:55:00 crc kubenswrapper[4913]: E0121 06:55:00.838371 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b\": container with ID starting with c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b not found: ID does not exist" containerID="c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.838443 
4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b"} err="failed to get container status \"c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b\": rpc error: code = NotFound desc = could not find container \"c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b\": container with ID starting with c1143280666882b924f78c0ca9f8353f55025052dfa042c5b87cef40b7e9e53b not found: ID does not exist" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.841898 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4203da3-d347-42bb-8e9b-6bdbf250c4eb" (UID: "a4203da3-d347-42bb-8e9b-6bdbf250c4eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.845652 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/cinder-api-0" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.106:8776/healthcheck\": read tcp 10.217.0.2:49308->10.217.0.106:8776: read: connection reset by peer" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.852756 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data" (OuterVolumeSpecName: "config-data") pod "bea9195d-d908-4239-a57b-6783d75b959c" (UID: "bea9195d-d908-4239-a57b-6783d75b959c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.856199 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data" (OuterVolumeSpecName: "config-data") pod "a4203da3-d347-42bb-8e9b-6bdbf250c4eb" (UID: "a4203da3-d347-42bb-8e9b-6bdbf250c4eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879234 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879257 4913 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879266 4913 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879277 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879286 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879293 4913 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879301 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879309 4913 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879317 4913 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-dev\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879324 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879332 4913 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879340 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bea9195d-d908-4239-a57b-6783d75b959c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879348 4913 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 
06:55:00.879356 4913 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879363 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879371 4913 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-sys\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879378 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv65p\" (UniqueName: \"kubernetes.io/projected/bea9195d-d908-4239-a57b-6783d75b959c-kube-api-access-lv65p\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879386 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bea9195d-d908-4239-a57b-6783d75b959c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879417 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lndcw\" (UniqueName: \"kubernetes.io/projected/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-kube-api-access-lndcw\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:00 crc kubenswrapper[4913]: I0121 06:55:00.879425 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4203da3-d347-42bb-8e9b-6bdbf250c4eb-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.102749 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:55:01 
crc kubenswrapper[4913]: I0121 06:55:01.111797 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-scheduler-0"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.121974 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.134223 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-backup-0"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.372402 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.438737 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-7hkmw"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.446511 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-db-create-7hkmw"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.455835 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.462237 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/cinderb51d-account-delete-jvhc7"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.466745 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-b51d-account-create-update-d95q6"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.478418 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinderb51d-account-delete-jvhc7"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.495970 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data-custom\") pod 
\"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496013 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496044 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2458ee8d-8802-4047-a9fe-d077f2d2450d-logs\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496072 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-internal-tls-certs\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496112 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-combined-ca-bundle\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496144 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-public-tls-certs\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496162 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2458ee8d-8802-4047-a9fe-d077f2d2450d-etc-machine-id\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496217 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-scripts\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496235 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5bkf\" (UniqueName: \"kubernetes.io/projected/2458ee8d-8802-4047-a9fe-d077f2d2450d-kube-api-access-m5bkf\") pod \"2458ee8d-8802-4047-a9fe-d077f2d2450d\" (UID: \"2458ee8d-8802-4047-a9fe-d077f2d2450d\") " Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.496559 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2458ee8d-8802-4047-a9fe-d077f2d2450d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.497359 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2458ee8d-8802-4047-a9fe-d077f2d2450d-logs" (OuterVolumeSpecName: "logs") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.500215 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-scripts" (OuterVolumeSpecName: "scripts") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.500929 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2458ee8d-8802-4047-a9fe-d077f2d2450d-kube-api-access-m5bkf" (OuterVolumeSpecName: "kube-api-access-m5bkf") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "kube-api-access-m5bkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.501129 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.513962 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.531900 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.542721 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data" (OuterVolumeSpecName: "config-data") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.545106 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2458ee8d-8802-4047-a9fe-d077f2d2450d" (UID: "2458ee8d-8802-4047-a9fe-d077f2d2450d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.597293 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.597436 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.597516 4913 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2458ee8d-8802-4047-a9fe-d077f2d2450d-logs\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.597582 4913 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.597645 4913 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.597691 4913 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.598154 4913 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2458ee8d-8802-4047-a9fe-d077f2d2450d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.598179 4913 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2458ee8d-8802-4047-a9fe-d077f2d2450d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.598189 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5bkf\" (UniqueName: \"kubernetes.io/projected/2458ee8d-8802-4047-a9fe-d077f2d2450d-kube-api-access-m5bkf\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.751649 4913 generic.go:334] "Generic (PLEG): container finished" podID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerID="f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616" exitCode=0 Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.751751 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/cinder-api-0" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.751774 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"2458ee8d-8802-4047-a9fe-d077f2d2450d","Type":"ContainerDied","Data":"f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616"} Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.751844 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/cinder-api-0" event={"ID":"2458ee8d-8802-4047-a9fe-d077f2d2450d","Type":"ContainerDied","Data":"449324410dac3de75a4a4476507668ac6f203fc50d5a4fe75ff92301e132c9eb"} Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.751874 4913 scope.go:117] "RemoveContainer" containerID="f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.786020 4913 scope.go:117] "RemoveContainer" containerID="93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.800776 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.810414 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/cinder-api-0"] Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.815574 4913 scope.go:117] "RemoveContainer" containerID="f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616" Jan 21 06:55:01 crc kubenswrapper[4913]: E0121 06:55:01.816417 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616\": container with ID starting with f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616 not found: ID does not exist" containerID="f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.816467 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616"} err="failed to get container status \"f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616\": rpc error: code = NotFound desc = could not find container \"f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616\": container with ID starting with f38337747e36682475622e750841856d6c175cb99cf327970e273a48e311d616 not found: ID does not exist" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.816506 4913 scope.go:117] "RemoveContainer" containerID="93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2" Jan 21 06:55:01 crc kubenswrapper[4913]: E0121 06:55:01.817245 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2\": container with ID starting with 93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2 not found: ID does not exist" 
containerID="93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2" Jan 21 06:55:01 crc kubenswrapper[4913]: I0121 06:55:01.817326 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2"} err="failed to get container status \"93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2\": rpc error: code = NotFound desc = could not find container \"93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2\": container with ID starting with 93855114c2214afe568710eb70620dabef8e079dd4ea59c5f39d8e8951e90ed2 not found: ID does not exist" Jan 21 06:55:02 crc kubenswrapper[4913]: I0121 06:55:02.540831 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef9f85f-61e3-44b8-8974-9ec0a1e4e359" path="/var/lib/kubelet/pods/0ef9f85f-61e3-44b8-8974-9ec0a1e4e359/volumes" Jan 21 06:55:02 crc kubenswrapper[4913]: I0121 06:55:02.542653 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" path="/var/lib/kubelet/pods/2458ee8d-8802-4047-a9fe-d077f2d2450d/volumes" Jan 21 06:55:02 crc kubenswrapper[4913]: I0121 06:55:02.544108 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="583f045f-efa0-4df8-8b2f-b9699740fc92" path="/var/lib/kubelet/pods/583f045f-efa0-4df8-8b2f-b9699740fc92/volumes" Jan 21 06:55:02 crc kubenswrapper[4913]: I0121 06:55:02.546574 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" path="/var/lib/kubelet/pods/a4203da3-d347-42bb-8e9b-6bdbf250c4eb/volumes" Jan 21 06:55:02 crc kubenswrapper[4913]: I0121 06:55:02.548167 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea9195d-d908-4239-a57b-6783d75b959c" path="/var/lib/kubelet/pods/bea9195d-d908-4239-a57b-6783d75b959c/volumes" Jan 21 06:55:02 crc kubenswrapper[4913]: I0121 06:55:02.549646 4913 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed1e527b-217a-46b6-a907-0a6b589f7c4c" path="/var/lib/kubelet/pods/ed1e527b-217a-46b6-a907-0a6b589f7c4c/volumes" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.437706 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-p8tbb"] Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.445317 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-dd79k"] Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.457299 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-bootstrap-p8tbb"] Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.462652 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-db-sync-dd79k"] Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478493 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/keystonec9ef-account-delete-g88bp"] Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478802 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="cinder-scheduler" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478819 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="cinder-scheduler" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478829 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478836 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478848 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" 
containerName="cinder-volume" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478854 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="cinder-volume" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478864 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="cinder-volume" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478870 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="cinder-volume" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478879 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478886 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478896 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478902 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478912 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="cinder-backup" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478917 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="cinder-backup" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478926 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api-log" Jan 21 06:55:03 crc 
kubenswrapper[4913]: I0121 06:55:03.478931 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api-log" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478940 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="583f045f-efa0-4df8-8b2f-b9699740fc92" containerName="mariadb-account-delete" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478945 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="583f045f-efa0-4df8-8b2f-b9699740fc92" containerName="mariadb-account-delete" Jan 21 06:55:03 crc kubenswrapper[4913]: E0121 06:55:03.478960 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.478966 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479065 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api-log" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479078 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479086 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479097 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479104 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="probe" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479112 
4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="2458ee8d-8802-4047-a9fe-d077f2d2450d" containerName="cinder-api" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479119 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea9195d-d908-4239-a57b-6783d75b959c" containerName="cinder-backup" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479125 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="cinder-volume" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479134 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="583f045f-efa0-4df8-8b2f-b9699740fc92" containerName="mariadb-account-delete" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479141 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4203da3-d347-42bb-8e9b-6bdbf250c4eb" containerName="cinder-scheduler" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479148 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="cinder-volume" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.479635 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.482495 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystonec9ef-account-delete-g88bp"] Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.524745 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-8b78684d-zcwhw"] Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.525029 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" podUID="fde82b66-4c57-4f59-839e-5ccb89d18944" containerName="keystone-api" containerID="cri-o://d294c4ea90e5f953926801ea6719d5c61ee2092ec2dce8435ddc0d9c0d451588" gracePeriod=30 Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.531495 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts\") pod \"keystonec9ef-account-delete-g88bp\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.531565 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mttcs\" (UniqueName: \"kubernetes.io/projected/1bcf0783-d151-4d4d-ad95-5671ec458c85-kube-api-access-mttcs\") pod \"keystonec9ef-account-delete-g88bp\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.632560 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts\") pod \"keystonec9ef-account-delete-g88bp\" (UID: 
\"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.632662 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mttcs\" (UniqueName: \"kubernetes.io/projected/1bcf0783-d151-4d4d-ad95-5671ec458c85-kube-api-access-mttcs\") pod \"keystonec9ef-account-delete-g88bp\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.633333 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts\") pod \"keystonec9ef-account-delete-g88bp\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.666977 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mttcs\" (UniqueName: \"kubernetes.io/projected/1bcf0783-d151-4d4d-ad95-5671ec458c85-kube-api-access-mttcs\") pod \"keystonec9ef-account-delete-g88bp\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:03 crc kubenswrapper[4913]: I0121 06:55:03.795512 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.186289 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-thtbk"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.221082 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-thtbk"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.228557 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cinder-kuttl-tests/root-account-create-update-f9cwc"] Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.229010 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="probe" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.229030 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9b3c88-2566-48cb-8f74-d1976b0e6bd1" containerName="probe" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.229805 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.233450 4913 reflector.go:368] Caches populated for *v1.Secret from object-"cinder-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.238218 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.244048 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.253222 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.260141 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-f9cwc"] Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.262053 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-nltwk operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="cinder-kuttl-tests/root-account-create-update-f9cwc" podUID="91b33253-3e4a-44e4-9354-e293f4758d78" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.268957 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/keystonec9ef-account-delete-g88bp"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.287760 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-f9cwc"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.342666 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nltwk\" (UniqueName: \"kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk\") pod \"root-account-create-update-f9cwc\" (UID: 
\"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.342750 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.399825 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/openstack-galera-2" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="galera" containerID="cri-o://b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa" gracePeriod=30 Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.444366 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nltwk\" (UniqueName: \"kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.444523 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.444685 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.444781 4913 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts podName:91b33253-3e4a-44e4-9354-e293f4758d78 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:04.944759329 +0000 UTC m=+1194.741119002 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts") pod "root-account-create-update-f9cwc" (UID: "91b33253-3e4a-44e4-9354-e293f4758d78") : configmap "openstack-scripts" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.448845 4913 projected.go:194] Error preparing data for projected volume kube-api-access-nltwk for pod cinder-kuttl-tests/root-account-create-update-f9cwc: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.448913 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk podName:91b33253-3e4a-44e4-9354-e293f4758d78 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:04.948896611 +0000 UTC m=+1194.745256284 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nltwk" (UniqueName: "kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk") pod "root-account-create-update-f9cwc" (UID: "91b33253-3e4a-44e4-9354-e293f4758d78") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.534261 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09733cef-ac9b-4a13-92a5-4b416079180f" path="/var/lib/kubelet/pods/09733cef-ac9b-4a13-92a5-4b416079180f/volumes" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.535188 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="345b0465-d6ca-45e5-bd9d-47a6adacb366" path="/var/lib/kubelet/pods/345b0465-d6ca-45e5-bd9d-47a6adacb366/volumes" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.535896 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="694c0c16-1814-40f7-b7a8-c6f4d7ee7a56" path="/var/lib/kubelet/pods/694c0c16-1814-40f7-b7a8-c6f4d7ee7a56/volumes" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.790147 4913 generic.go:334] "Generic (PLEG): container finished" podID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerID="5b19fca2e0415541325dffce9796df19298b62c4daa327fe5fec8ed05f6b1cd2" exitCode=1 Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.790234 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.790258 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" event={"ID":"1bcf0783-d151-4d4d-ad95-5671ec458c85","Type":"ContainerDied","Data":"5b19fca2e0415541325dffce9796df19298b62c4daa327fe5fec8ed05f6b1cd2"} Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.790341 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" event={"ID":"1bcf0783-d151-4d4d-ad95-5671ec458c85","Type":"ContainerStarted","Data":"337073b67ec506202df85dc5fd188199394795c2d0bc028de5810b4d603092f0"} Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.790787 4913 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" secret="" err="secret \"galera-openstack-dockercfg-6gtwj\" not found" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.790845 4913 scope.go:117] "RemoveContainer" containerID="5b19fca2e0415541325dffce9796df19298b62c4daa327fe5fec8ed05f6b1cd2" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.807499 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.849925 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.849991 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts podName:1bcf0783-d151-4d4d-ad95-5671ec458c85 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:05.349977633 +0000 UTC m=+1195.146337306 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts") pod "keystonec9ef-account-delete-g88bp" (UID: "1bcf0783-d151-4d4d-ad95-5671ec458c85") : configmap "openstack-scripts" not found Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.915470 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.915970 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/memcached-0" podUID="ac820b36-83fb-44ca-97b0-6181846a5ef3" containerName="memcached" containerID="cri-o://9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3" gracePeriod=30 Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.952381 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nltwk\" (UniqueName: \"kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: I0121 06:55:04.953019 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.953330 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.953393 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts podName:91b33253-3e4a-44e4-9354-e293f4758d78 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:05.953374816 +0000 UTC m=+1195.749734499 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts") pod "root-account-create-update-f9cwc" (UID: "91b33253-3e4a-44e4-9354-e293f4758d78") : configmap "openstack-scripts" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.967218 4913 projected.go:194] Error preparing data for projected volume kube-api-access-nltwk for pod cinder-kuttl-tests/root-account-create-update-f9cwc: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 06:55:04 crc kubenswrapper[4913]: E0121 06:55:04.967349 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk podName:91b33253-3e4a-44e4-9354-e293f4758d78 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:05.967316702 +0000 UTC m=+1195.763676415 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nltwk" (UniqueName: "kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk") pod "root-account-create-update-f9cwc" (UID: "91b33253-3e4a-44e4-9354-e293f4758d78") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.318655 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.359769 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.359858 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts podName:1bcf0783-d151-4d4d-ad95-5671ec458c85 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:06.359839183 +0000 UTC m=+1196.156198866 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts") pod "keystonec9ef-account-delete-g88bp" (UID: "1bcf0783-d151-4d4d-ad95-5671ec458c85") : configmap "openstack-scripts" not found Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.420116 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.461234 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-operator-scripts\") pod \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.461378 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.461427 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-default\") pod \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.461500 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xh9p\" (UniqueName: \"kubernetes.io/projected/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kube-api-access-2xh9p\") pod \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.461559 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kolla-config\") pod \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.461634 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-generated\") pod \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\" (UID: \"06b1fd9b-951d-4d8e-8a08-4a2e8d820370\") " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.462089 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "06b1fd9b-951d-4d8e-8a08-4a2e8d820370" (UID: "06b1fd9b-951d-4d8e-8a08-4a2e8d820370"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.462277 4913 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.462418 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06b1fd9b-951d-4d8e-8a08-4a2e8d820370" (UID: "06b1fd9b-951d-4d8e-8a08-4a2e8d820370"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.462452 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "06b1fd9b-951d-4d8e-8a08-4a2e8d820370" (UID: "06b1fd9b-951d-4d8e-8a08-4a2e8d820370"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.463756 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "06b1fd9b-951d-4d8e-8a08-4a2e8d820370" (UID: "06b1fd9b-951d-4d8e-8a08-4a2e8d820370"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.470852 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kube-api-access-2xh9p" (OuterVolumeSpecName: "kube-api-access-2xh9p") pod "06b1fd9b-951d-4d8e-8a08-4a2e8d820370" (UID: "06b1fd9b-951d-4d8e-8a08-4a2e8d820370"). InnerVolumeSpecName "kube-api-access-2xh9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.475227 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "06b1fd9b-951d-4d8e-8a08-4a2e8d820370" (UID: "06b1fd9b-951d-4d8e-8a08-4a2e8d820370"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.563980 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.564022 4913 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.564032 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.564042 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xh9p\" (UniqueName: \"kubernetes.io/projected/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-kube-api-access-2xh9p\") on node 
\"crc\" DevicePath \"\"" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.564053 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06b1fd9b-951d-4d8e-8a08-4a2e8d820370-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.576439 4913 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.665986 4913 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.801162 4913 generic.go:334] "Generic (PLEG): container finished" podID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerID="b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa" exitCode=0 Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.801242 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-2" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.801287 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"06b1fd9b-951d-4d8e-8a08-4a2e8d820370","Type":"ContainerDied","Data":"b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa"} Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.801382 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-2" event={"ID":"06b1fd9b-951d-4d8e-8a08-4a2e8d820370","Type":"ContainerDied","Data":"5f5d4f1ef26e68f7b2d31a9b3d84d0da1ff312a47ab5657edc54afc49f04f096"} Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.801410 4913 scope.go:117] "RemoveContainer" containerID="b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.806267 4913 generic.go:334] "Generic (PLEG): container finished" podID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerID="d493af4a93591f2b1c8fa78ec186c17f70ebcbeb31275b5236ba24e1c45c3275" exitCode=1 Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.806381 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.806747 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" event={"ID":"1bcf0783-d151-4d4d-ad95-5671ec458c85","Type":"ContainerDied","Data":"d493af4a93591f2b1c8fa78ec186c17f70ebcbeb31275b5236ba24e1c45c3275"} Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.807154 4913 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" secret="" err="secret \"galera-openstack-dockercfg-6gtwj\" not found" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.807213 4913 scope.go:117] "RemoveContainer" containerID="d493af4a93591f2b1c8fa78ec186c17f70ebcbeb31275b5236ba24e1c45c3275" Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.807535 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystonec9ef-account-delete-g88bp_cinder-kuttl-tests(1bcf0783-d151-4d4d-ad95-5671ec458c85)\"" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.843010 4913 scope.go:117] "RemoveContainer" containerID="00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.866157 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.871688 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/openstack-galera-2"] Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.900541 4913 scope.go:117] "RemoveContainer" containerID="b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa" Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.902550 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa\": container with ID starting with b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa not found: ID does not exist" containerID="b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.902648 
4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa"} err="failed to get container status \"b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa\": rpc error: code = NotFound desc = could not find container \"b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa\": container with ID starting with b8133333ed048a1d272840dad93450d38741f320ed6ac06830e7093e695ac4aa not found: ID does not exist" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.902689 4913 scope.go:117] "RemoveContainer" containerID="00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49" Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.903762 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49\": container with ID starting with 00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49 not found: ID does not exist" containerID="00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.903809 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49"} err="failed to get container status \"00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49\": rpc error: code = NotFound desc = could not find container \"00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49\": container with ID starting with 00fb8f3c419c694119fa9b8b73e75a176b6d44f4d0a6d0ea42d32dda05e5cb49 not found: ID does not exist" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.903836 4913 scope.go:117] "RemoveContainer" containerID="5b19fca2e0415541325dffce9796df19298b62c4daa327fe5fec8ed05f6b1cd2" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 
06:55:05.927184 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-f9cwc"] Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.931447 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.937687 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/root-account-create-update-f9cwc"] Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.972839 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/rabbitmq-server-0" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="rabbitmq" containerID="cri-o://f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54" gracePeriod=604800 Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.972952 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nltwk\" (UniqueName: \"kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:05 crc kubenswrapper[4913]: I0121 06:55:05.973043 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts\") pod \"root-account-create-update-f9cwc\" (UID: \"91b33253-3e4a-44e4-9354-e293f4758d78\") " pod="cinder-kuttl-tests/root-account-create-update-f9cwc" Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.973175 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.973261 4913 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts podName:91b33253-3e4a-44e4-9354-e293f4758d78 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:07.97324214 +0000 UTC m=+1197.769601823 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts") pod "root-account-create-update-f9cwc" (UID: "91b33253-3e4a-44e4-9354-e293f4758d78") : configmap "openstack-scripts" not found Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.974822 4913 projected.go:194] Error preparing data for projected volume kube-api-access-nltwk for pod cinder-kuttl-tests/root-account-create-update-f9cwc: failed to fetch token: pod "root-account-create-update-f9cwc" not found Jan 21 06:55:05 crc kubenswrapper[4913]: E0121 06:55:05.974896 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk podName:91b33253-3e4a-44e4-9354-e293f4758d78 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:07.974874483 +0000 UTC m=+1197.771234176 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nltwk" (UniqueName: "kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk") pod "root-account-create-update-f9cwc" (UID: "91b33253-3e4a-44e4-9354-e293f4758d78") : failed to fetch token: pod "root-account-create-update-f9cwc" not found Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.074485 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91b33253-3e4a-44e4-9354-e293f4758d78-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.074532 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nltwk\" (UniqueName: \"kubernetes.io/projected/91b33253-3e4a-44e4-9354-e293f4758d78-kube-api-access-nltwk\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.319188 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/memcached-0" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.379365 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx747\" (UniqueName: \"kubernetes.io/projected/ac820b36-83fb-44ca-97b0-6181846a5ef3-kube-api-access-tx747\") pod \"ac820b36-83fb-44ca-97b0-6181846a5ef3\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.379452 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-config-data\") pod \"ac820b36-83fb-44ca-97b0-6181846a5ef3\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.379525 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-kolla-config\") pod \"ac820b36-83fb-44ca-97b0-6181846a5ef3\" (UID: \"ac820b36-83fb-44ca-97b0-6181846a5ef3\") " Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.379934 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-config-data" (OuterVolumeSpecName: "config-data") pod "ac820b36-83fb-44ca-97b0-6181846a5ef3" (UID: "ac820b36-83fb-44ca-97b0-6181846a5ef3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:06 crc kubenswrapper[4913]: E0121 06:55:06.380078 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.380103 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "ac820b36-83fb-44ca-97b0-6181846a5ef3" (UID: "ac820b36-83fb-44ca-97b0-6181846a5ef3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.380112 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:06 crc kubenswrapper[4913]: E0121 06:55:06.380130 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts podName:1bcf0783-d151-4d4d-ad95-5671ec458c85 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:08.380112237 +0000 UTC m=+1198.176471910 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts") pod "keystonec9ef-account-delete-g88bp" (UID: "1bcf0783-d151-4d4d-ad95-5671ec458c85") : configmap "openstack-scripts" not found Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.384629 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac820b36-83fb-44ca-97b0-6181846a5ef3-kube-api-access-tx747" (OuterVolumeSpecName: "kube-api-access-tx747") pod "ac820b36-83fb-44ca-97b0-6181846a5ef3" (UID: "ac820b36-83fb-44ca-97b0-6181846a5ef3"). InnerVolumeSpecName "kube-api-access-tx747". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.456674 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/openstack-galera-1" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="galera" containerID="cri-o://fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" gracePeriod=28 Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.481236 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx747\" (UniqueName: \"kubernetes.io/projected/ac820b36-83fb-44ca-97b0-6181846a5ef3-kube-api-access-tx747\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.481279 4913 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac820b36-83fb-44ca-97b0-6181846a5ef3-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.536906 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" path="/var/lib/kubelet/pods/06b1fd9b-951d-4d8e-8a08-4a2e8d820370/volumes" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.537787 4913 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="91b33253-3e4a-44e4-9354-e293f4758d78" path="/var/lib/kubelet/pods/91b33253-3e4a-44e4-9354-e293f4758d78/volumes" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.827452 4913 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" secret="" err="secret \"galera-openstack-dockercfg-6gtwj\" not found" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.827501 4913 scope.go:117] "RemoveContainer" containerID="d493af4a93591f2b1c8fa78ec186c17f70ebcbeb31275b5236ba24e1c45c3275" Jan 21 06:55:06 crc kubenswrapper[4913]: E0121 06:55:06.827718 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystonec9ef-account-delete-g88bp_cinder-kuttl-tests(1bcf0783-d151-4d4d-ad95-5671ec458c85)\"" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.830772 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"] Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.831019 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" podUID="5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" containerName="manager" containerID="cri-o://19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a" gracePeriod=10 Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.851475 4913 generic.go:334] "Generic (PLEG): container finished" podID="fde82b66-4c57-4f59-839e-5ccb89d18944" containerID="d294c4ea90e5f953926801ea6719d5c61ee2092ec2dce8435ddc0d9c0d451588" exitCode=0 Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.851619 4913 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" event={"ID":"fde82b66-4c57-4f59-839e-5ccb89d18944","Type":"ContainerDied","Data":"d294c4ea90e5f953926801ea6719d5c61ee2092ec2dce8435ddc0d9c0d451588"} Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.860128 4913 generic.go:334] "Generic (PLEG): container finished" podID="ac820b36-83fb-44ca-97b0-6181846a5ef3" containerID="9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3" exitCode=0 Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.860185 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"ac820b36-83fb-44ca-97b0-6181846a5ef3","Type":"ContainerDied","Data":"9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3"} Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.860208 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/memcached-0" event={"ID":"ac820b36-83fb-44ca-97b0-6181846a5ef3","Type":"ContainerDied","Data":"7628ff84dc7d5ccc58494fe2504418c556a22a9bf641ef4e437400ce6be05e92"} Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.860223 4913 scope.go:117] "RemoveContainer" containerID="9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.860305 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/memcached-0" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.883077 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.892482 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/memcached-0"] Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.902797 4913 scope.go:117] "RemoveContainer" containerID="9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3" Jan 21 06:55:06 crc kubenswrapper[4913]: E0121 06:55:06.909231 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3\": container with ID starting with 9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3 not found: ID does not exist" containerID="9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3" Jan 21 06:55:06 crc kubenswrapper[4913]: I0121 06:55:06.909276 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3"} err="failed to get container status \"9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3\": rpc error: code = NotFound desc = could not find container \"9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3\": container with ID starting with 9dd8e19fcbe084a6096045bf25c03872daed98af33f342f64f6a5ba95daf0ab3 not found: ID does not exist" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.153700 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-index-4jlfb"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.154290 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/cinder-operator-index-4jlfb" 
podUID="4f61c697-fbcc-4e33-929b-03eacd477d73" containerName="registry-server" containerID="cri-o://9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d" gracePeriod=30 Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.194065 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.204152 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/87c1ee5e7ae67bedd8aa645fa46cf5f19b7e4a53a29390ef76831ff89d2q8qg"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.371335 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.377024 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497485 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-webhook-cert\") pod \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497541 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-scripts\") pod \"fde82b66-4c57-4f59-839e-5ccb89d18944\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497577 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-config-data\") pod \"fde82b66-4c57-4f59-839e-5ccb89d18944\" (UID: 
\"fde82b66-4c57-4f59-839e-5ccb89d18944\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497632 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-credential-keys\") pod \"fde82b66-4c57-4f59-839e-5ccb89d18944\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497684 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c7c7\" (UniqueName: \"kubernetes.io/projected/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-kube-api-access-4c7c7\") pod \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497703 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-fernet-keys\") pod \"fde82b66-4c57-4f59-839e-5ccb89d18944\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497741 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-apiservice-cert\") pod \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\" (UID: \"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.497789 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6l77\" (UniqueName: \"kubernetes.io/projected/fde82b66-4c57-4f59-839e-5ccb89d18944-kube-api-access-z6l77\") pod \"fde82b66-4c57-4f59-839e-5ccb89d18944\" (UID: \"fde82b66-4c57-4f59-839e-5ccb89d18944\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.505693 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fde82b66-4c57-4f59-839e-5ccb89d18944-kube-api-access-z6l77" (OuterVolumeSpecName: "kube-api-access-z6l77") pod "fde82b66-4c57-4f59-839e-5ccb89d18944" (UID: "fde82b66-4c57-4f59-839e-5ccb89d18944"). InnerVolumeSpecName "kube-api-access-z6l77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.506830 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fde82b66-4c57-4f59-839e-5ccb89d18944" (UID: "fde82b66-4c57-4f59-839e-5ccb89d18944"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.506823 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fde82b66-4c57-4f59-839e-5ccb89d18944" (UID: "fde82b66-4c57-4f59-839e-5ccb89d18944"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.507252 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-scripts" (OuterVolumeSpecName: "scripts") pod "fde82b66-4c57-4f59-839e-5ccb89d18944" (UID: "fde82b66-4c57-4f59-839e-5ccb89d18944"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.508198 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" (UID: "5a73c5aa-0503-4a97-918b-f7f81ec4bc4d"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.509255 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" (UID: "5a73c5aa-0503-4a97-918b-f7f81ec4bc4d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.517747 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-kube-api-access-4c7c7" (OuterVolumeSpecName: "kube-api-access-4c7c7") pod "5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" (UID: "5a73c5aa-0503-4a97-918b-f7f81ec4bc4d"). InnerVolumeSpecName "kube-api-access-4c7c7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.519916 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-config-data" (OuterVolumeSpecName: "config-data") pod "fde82b66-4c57-4f59-839e-5ccb89d18944" (UID: "fde82b66-4c57-4f59-839e-5ccb89d18944"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.566782 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599693 4913 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599729 4913 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599740 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c7c7\" (UniqueName: \"kubernetes.io/projected/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-kube-api-access-4c7c7\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599750 4913 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599759 4913 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599767 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6l77\" (UniqueName: \"kubernetes.io/projected/fde82b66-4c57-4f59-839e-5ccb89d18944-kube-api-access-z6l77\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.599776 4913 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 
06:55:07.599785 4913 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fde82b66-4c57-4f59-839e-5ccb89d18944-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.608499 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.700910 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-pod-info\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701193 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-confd\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701212 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-plugins-conf\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701270 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4vm7\" (UniqueName: \"kubernetes.io/projected/4f61c697-fbcc-4e33-929b-03eacd477d73-kube-api-access-n4vm7\") pod \"4f61c697-fbcc-4e33-929b-03eacd477d73\" (UID: \"4f61c697-fbcc-4e33-929b-03eacd477d73\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701290 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-erlang-cookie-secret\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701324 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-erlang-cookie\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701340 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-plugins\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701664 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701726 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.701752 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.702070 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.702636 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgplc\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-kube-api-access-rgplc\") pod \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\" (UID: \"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7\") " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.702915 4913 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.702930 4913 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.702939 4913 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.704332 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-pod-info" (OuterVolumeSpecName: "pod-info") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.704364 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f61c697-fbcc-4e33-929b-03eacd477d73-kube-api-access-n4vm7" (OuterVolumeSpecName: "kube-api-access-n4vm7") pod "4f61c697-fbcc-4e33-929b-03eacd477d73" (UID: "4f61c697-fbcc-4e33-929b-03eacd477d73"). InnerVolumeSpecName "kube-api-access-n4vm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.704434 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.705476 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-kube-api-access-rgplc" (OuterVolumeSpecName: "kube-api-access-rgplc") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "kube-api-access-rgplc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.712093 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965" (OuterVolumeSpecName: "persistence") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "pvc-aba37d31-e514-4471-ba72-3f2eeef63965". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.759504 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" (UID: "05b3d506-1b7a-4e74-8e75-bd5ad371a3e7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.804331 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgplc\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-kube-api-access-rgplc\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.804408 4913 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.804429 4913 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.804446 4913 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.804464 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4vm7\" (UniqueName: \"kubernetes.io/projected/4f61c697-fbcc-4e33-929b-03eacd477d73-kube-api-access-n4vm7\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.804534 4913 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-aba37d31-e514-4471-ba72-3f2eeef63965\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") on node \"crc\" " Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.828527 4913 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.828824 4913 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-aba37d31-e514-4471-ba72-3f2eeef63965" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965") on node "crc" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.877947 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" event={"ID":"fde82b66-4c57-4f59-839e-5ccb89d18944","Type":"ContainerDied","Data":"319d2fc6458bad5a006b1117b9ecf9841ebe516000026a3a782671bea30c10cd"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.878007 4913 scope.go:117] "RemoveContainer" containerID="d294c4ea90e5f953926801ea6719d5c61ee2092ec2dce8435ddc0d9c0d451588" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.878016 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/keystone-8b78684d-zcwhw" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.883197 4913 generic.go:334] "Generic (PLEG): container finished" podID="4f61c697-fbcc-4e33-929b-03eacd477d73" containerID="9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d" exitCode=0 Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.883237 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-index-4jlfb" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.883304 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-4jlfb" event={"ID":"4f61c697-fbcc-4e33-929b-03eacd477d73","Type":"ContainerDied","Data":"9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.883414 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-index-4jlfb" event={"ID":"4f61c697-fbcc-4e33-929b-03eacd477d73","Type":"ContainerDied","Data":"d11a8cc3dae9f6c80e40b9a2d85e4439b796186712180a07c1c9bd0221b721e6"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.889412 4913 generic.go:334] "Generic (PLEG): container finished" podID="5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" containerID="19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a" exitCode=0 Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.889480 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" event={"ID":"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d","Type":"ContainerDied","Data":"19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.889503 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.889504 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2" event={"ID":"5a73c5aa-0503-4a97-918b-f7f81ec4bc4d","Type":"ContainerDied","Data":"f82866f4a640b05de032b2242387c51f49628b80b3fcaf42729718719aa9d672"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.891972 4913 generic.go:334] "Generic (PLEG): container finished" podID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerID="f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54" exitCode=0 Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.892005 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7","Type":"ContainerDied","Data":"f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.892025 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/rabbitmq-server-0" event={"ID":"05b3d506-1b7a-4e74-8e75-bd5ad371a3e7","Type":"ContainerDied","Data":"f98e00a3d92598e5485091863498c91da120da08e5945df34002da676d66c385"} Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.892075 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/rabbitmq-server-0" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.905382 4913 reconciler_common.go:293] "Volume detached for volume \"pvc-aba37d31-e514-4471-ba72-3f2eeef63965\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aba37d31-e514-4471-ba72-3f2eeef63965\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.914865 4913 scope.go:117] "RemoveContainer" containerID="9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.934137 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-8b78684d-zcwhw"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.941745 4913 scope.go:117] "RemoveContainer" containerID="9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d" Jan 21 06:55:07 crc kubenswrapper[4913]: E0121 06:55:07.942800 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d\": container with ID starting with 9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d not found: ID does not exist" containerID="9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.942844 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d"} err="failed to get container status \"9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d\": rpc error: code = NotFound desc = could not find container \"9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d\": container with ID starting with 9dfeb25127bce1c92c4a163f5c3c844085920a01532c767ad05b1890fe6a9c6d not found: ID does not exist" Jan 21 06:55:07 crc 
kubenswrapper[4913]: I0121 06:55:07.942877 4913 scope.go:117] "RemoveContainer" containerID="19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.948018 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-8b78684d-zcwhw"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.958424 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-index-4jlfb"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.965496 4913 scope.go:117] "RemoveContainer" containerID="19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a" Jan 21 06:55:07 crc kubenswrapper[4913]: E0121 06:55:07.966438 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a\": container with ID starting with 19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a not found: ID does not exist" containerID="19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.966485 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a"} err="failed to get container status \"19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a\": rpc error: code = NotFound desc = could not find container \"19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a\": container with ID starting with 19547d3dab5cf206d887b58821b9cbd241a421c8d93d0373db9dfc2f8035ad7a not found: ID does not exist" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.966510 4913 scope.go:117] "RemoveContainer" containerID="f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.971165 4913 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/cinder-operator-index-4jlfb"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.976326 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.982057 4913 scope.go:117] "RemoveContainer" containerID="f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d" Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.983227 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57ddd6455-fxhz2"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.989448 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.994010 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/rabbitmq-server-0"] Jan 21 06:55:07 crc kubenswrapper[4913]: I0121 06:55:07.999395 4913 scope.go:117] "RemoveContainer" containerID="f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54" Jan 21 06:55:07 crc kubenswrapper[4913]: E0121 06:55:07.999886 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54\": container with ID starting with f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54 not found: ID does not exist" containerID="f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:07.999953 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54"} err="failed to get container status \"f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54\": rpc error: code = 
NotFound desc = could not find container \"f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54\": container with ID starting with f5cd8a8066b86dcd3451ba062da0a7370a6bd8212cc92ec7d9c4b6a53fcecc54 not found: ID does not exist" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:07.999983 4913 scope.go:117] "RemoveContainer" containerID="f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d" Jan 21 06:55:08 crc kubenswrapper[4913]: E0121 06:55:08.000467 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d\": container with ID starting with f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d not found: ID does not exist" containerID="f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.000514 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d"} err="failed to get container status \"f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d\": rpc error: code = NotFound desc = could not find container \"f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d\": container with ID starting with f59c10959e80e82ee4bdb277f821f1e60649896a8b9878492694d7dbfd3fbb5d not found: ID does not exist" Jan 21 06:55:08 crc kubenswrapper[4913]: E0121 06:55:08.095345 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94 is running failed: container process not found" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 06:55:08 crc kubenswrapper[4913]: 
E0121 06:55:08.096926 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94 is running failed: container process not found" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 06:55:08 crc kubenswrapper[4913]: E0121 06:55:08.097423 4913 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94 is running failed: container process not found" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 21 06:55:08 crc kubenswrapper[4913]: E0121 06:55:08.097501 4913 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94 is running failed: container process not found" probeType="Readiness" pod="cinder-kuttl-tests/openstack-galera-1" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="galera" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.377830 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:55:08 crc kubenswrapper[4913]: E0121 06:55:08.411636 4913 configmap.go:193] Couldn't get configMap cinder-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 21 06:55:08 crc kubenswrapper[4913]: E0121 06:55:08.411697 4913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts podName:1bcf0783-d151-4d4d-ad95-5671ec458c85 nodeName:}" failed. No retries permitted until 2026-01-21 06:55:12.411681955 +0000 UTC m=+1202.208041628 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts") pod "keystonec9ef-account-delete-g88bp" (UID: "1bcf0783-d151-4d4d-ad95-5671ec458c85") : configmap "openstack-scripts" not found Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.491777 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4g6xx"] Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.495010 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="cinder-kuttl-tests/openstack-galera-0" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerName="galera" containerID="cri-o://08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2" gracePeriod=26 Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.497097 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-db-create-4g6xx"] Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.502541 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn"] Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.506438 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystone-c9ef-account-create-update-l49vn"] Jan 21 06:55:08 
crc kubenswrapper[4913]: I0121 06:55:08.511079 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystonec9ef-account-delete-g88bp"] Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.511900 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"edaae817-2cda-4274-bad0-53165cffa224\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.511944 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-kolla-config\") pod \"edaae817-2cda-4274-bad0-53165cffa224\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.512011 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-operator-scripts\") pod \"edaae817-2cda-4274-bad0-53165cffa224\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.512070 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs7jc\" (UniqueName: \"kubernetes.io/projected/edaae817-2cda-4274-bad0-53165cffa224-kube-api-access-hs7jc\") pod \"edaae817-2cda-4274-bad0-53165cffa224\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.512093 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/edaae817-2cda-4274-bad0-53165cffa224-config-data-generated\") pod \"edaae817-2cda-4274-bad0-53165cffa224\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.512115 4913 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-config-data-default\") pod \"edaae817-2cda-4274-bad0-53165cffa224\" (UID: \"edaae817-2cda-4274-bad0-53165cffa224\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.513017 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "edaae817-2cda-4274-bad0-53165cffa224" (UID: "edaae817-2cda-4274-bad0-53165cffa224"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.514204 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "edaae817-2cda-4274-bad0-53165cffa224" (UID: "edaae817-2cda-4274-bad0-53165cffa224"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.516198 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edaae817-2cda-4274-bad0-53165cffa224-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "edaae817-2cda-4274-bad0-53165cffa224" (UID: "edaae817-2cda-4274-bad0-53165cffa224"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.516844 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "edaae817-2cda-4274-bad0-53165cffa224" (UID: "edaae817-2cda-4274-bad0-53165cffa224"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.519577 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edaae817-2cda-4274-bad0-53165cffa224-kube-api-access-hs7jc" (OuterVolumeSpecName: "kube-api-access-hs7jc") pod "edaae817-2cda-4274-bad0-53165cffa224" (UID: "edaae817-2cda-4274-bad0-53165cffa224"). InnerVolumeSpecName "kube-api-access-hs7jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.524234 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "mysql-db") pod "edaae817-2cda-4274-bad0-53165cffa224" (UID: "edaae817-2cda-4274-bad0-53165cffa224"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.539816 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ea35b3-9885-4acc-bed4-05b6213940be" path="/var/lib/kubelet/pods/01ea35b3-9885-4acc-bed4-05b6213940be/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.541411 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" path="/var/lib/kubelet/pods/05b3d506-1b7a-4e74-8e75-bd5ad371a3e7/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.542342 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e33604-9af2-42b5-b1ad-ecd76d4898d4" path="/var/lib/kubelet/pods/15e33604-9af2-42b5-b1ad-ecd76d4898d4/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.543913 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f61c697-fbcc-4e33-929b-03eacd477d73" path="/var/lib/kubelet/pods/4f61c697-fbcc-4e33-929b-03eacd477d73/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.544704 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" path="/var/lib/kubelet/pods/5a73c5aa-0503-4a97-918b-f7f81ec4bc4d/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.545670 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ce82f18-1e1d-40f1-8207-428ea9445bc3" path="/var/lib/kubelet/pods/8ce82f18-1e1d-40f1-8207-428ea9445bc3/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.547209 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac820b36-83fb-44ca-97b0-6181846a5ef3" path="/var/lib/kubelet/pods/ac820b36-83fb-44ca-97b0-6181846a5ef3/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.548026 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde82b66-4c57-4f59-839e-5ccb89d18944" 
path="/var/lib/kubelet/pods/fde82b66-4c57-4f59-839e-5ccb89d18944/volumes" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.613524 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs7jc\" (UniqueName: \"kubernetes.io/projected/edaae817-2cda-4274-bad0-53165cffa224-kube-api-access-hs7jc\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.613558 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/edaae817-2cda-4274-bad0-53165cffa224-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.613571 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.613613 4913 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.613628 4913 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.613640 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaae817-2cda-4274-bad0-53165cffa224-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.628504 4913 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.714570 
4913 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.781163 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.909493 4913 generic.go:334] "Generic (PLEG): container finished" podID="edaae817-2cda-4274-bad0-53165cffa224" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" exitCode=0 Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.909668 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-1" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.909670 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"edaae817-2cda-4274-bad0-53165cffa224","Type":"ContainerDied","Data":"fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94"} Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.909898 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-1" event={"ID":"edaae817-2cda-4274-bad0-53165cffa224","Type":"ContainerDied","Data":"9a8d3931bac549a2afa2ac686d043eed141a57ba24a7826d3e2088016b7fc44d"} Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.909955 4913 scope.go:117] "RemoveContainer" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.914526 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" event={"ID":"1bcf0783-d151-4d4d-ad95-5671ec458c85","Type":"ContainerDied","Data":"337073b67ec506202df85dc5fd188199394795c2d0bc028de5810b4d603092f0"} Jan 21 06:55:08 crc kubenswrapper[4913]: 
I0121 06:55:08.914702 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="cinder-kuttl-tests/keystonec9ef-account-delete-g88bp" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.916214 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts\") pod \"1bcf0783-d151-4d4d-ad95-5671ec458c85\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.916421 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mttcs\" (UniqueName: \"kubernetes.io/projected/1bcf0783-d151-4d4d-ad95-5671ec458c85-kube-api-access-mttcs\") pod \"1bcf0783-d151-4d4d-ad95-5671ec458c85\" (UID: \"1bcf0783-d151-4d4d-ad95-5671ec458c85\") " Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.918707 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bcf0783-d151-4d4d-ad95-5671ec458c85" (UID: "1bcf0783-d151-4d4d-ad95-5671ec458c85"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.923272 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bcf0783-d151-4d4d-ad95-5671ec458c85-kube-api-access-mttcs" (OuterVolumeSpecName: "kube-api-access-mttcs") pod "1bcf0783-d151-4d4d-ad95-5671ec458c85" (UID: "1bcf0783-d151-4d4d-ad95-5671ec458c85"). InnerVolumeSpecName "kube-api-access-mttcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.968978 4913 scope.go:117] "RemoveContainer" containerID="b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf" Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.971082 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 21 06:55:08 crc kubenswrapper[4913]: I0121 06:55:08.975871 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/openstack-galera-1"] Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.002879 4913 scope.go:117] "RemoveContainer" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" Jan 21 06:55:09 crc kubenswrapper[4913]: E0121 06:55:09.003404 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94\": container with ID starting with fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94 not found: ID does not exist" containerID="fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.003473 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94"} err="failed to get container status \"fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94\": rpc error: code = NotFound desc = could not find container \"fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94\": container with ID starting with fd86c660a7cad48cc3dea02803acf9321ea57c10bf3dd3c576ce97cef7461a94 not found: ID does not exist" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.003513 4913 scope.go:117] "RemoveContainer" containerID="b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf" Jan 21 06:55:09 crc 
kubenswrapper[4913]: E0121 06:55:09.003866 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf\": container with ID starting with b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf not found: ID does not exist" containerID="b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.003907 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf"} err="failed to get container status \"b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf\": rpc error: code = NotFound desc = could not find container \"b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf\": container with ID starting with b235f02946890311e0474c416e12af9faacdefe3d50c638536413139bdacddaf not found: ID does not exist" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.003932 4913 scope.go:117] "RemoveContainer" containerID="d493af4a93591f2b1c8fa78ec186c17f70ebcbeb31275b5236ba24e1c45c3275" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.019289 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bcf0783-d151-4d4d-ad95-5671ec458c85-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.019313 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mttcs\" (UniqueName: \"kubernetes.io/projected/1bcf0783-d151-4d4d-ad95-5671ec458c85-kube-api-access-mttcs\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.233789 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.248554 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/keystonec9ef-account-delete-g88bp"] Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.252920 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/keystonec9ef-account-delete-g88bp"] Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.321841 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.321916 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-operator-scripts\") pod \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.322025 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-generated\") pod \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.322043 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kolla-config\") pod \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.322064 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-default\") pod \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.322089 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fzmd\" (UniqueName: \"kubernetes.io/projected/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kube-api-access-5fzmd\") pod \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\" (UID: \"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c\") " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.322685 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" (UID: "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.322767 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" (UID: "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.323070 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" (UID: "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.323982 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" (UID: "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.325911 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kube-api-access-5fzmd" (OuterVolumeSpecName: "kube-api-access-5fzmd") pod "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" (UID: "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c"). InnerVolumeSpecName "kube-api-access-5fzmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.332738 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" (UID: "dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.424251 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fzmd\" (UniqueName: \"kubernetes.io/projected/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kube-api-access-5fzmd\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.424307 4913 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.424323 4913 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.424334 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.424343 4913 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.424353 4913 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.434895 4913 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.525506 4913 
reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.929145 4913 generic.go:334] "Generic (PLEG): container finished" podID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerID="08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2" exitCode=0 Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.929210 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c","Type":"ContainerDied","Data":"08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2"} Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.929236 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cinder-kuttl-tests/openstack-galera-0" event={"ID":"dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c","Type":"ContainerDied","Data":"275c8c95693d8bda2924cf30883ec6b9a72c6ebcd833f17576c0c6f8b32bd1ac"} Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.929251 4913 scope.go:117] "RemoveContainer" containerID="08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.929248 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="cinder-kuttl-tests/openstack-galera-0" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.955851 4913 scope.go:117] "RemoveContainer" containerID="11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.969818 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.975049 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["cinder-kuttl-tests/openstack-galera-0"] Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.996704 4913 scope.go:117] "RemoveContainer" containerID="08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2" Jan 21 06:55:09 crc kubenswrapper[4913]: E0121 06:55:09.997111 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2\": container with ID starting with 08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2 not found: ID does not exist" containerID="08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.997145 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2"} err="failed to get container status \"08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2\": rpc error: code = NotFound desc = could not find container \"08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2\": container with ID starting with 08757b55fa876109bec964f7f34697257a3cf781e6734d0f263a5025be4a23f2 not found: ID does not exist" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.997173 4913 scope.go:117] "RemoveContainer" containerID="11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8" Jan 21 
06:55:09 crc kubenswrapper[4913]: E0121 06:55:09.997829 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8\": container with ID starting with 11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8 not found: ID does not exist" containerID="11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8" Jan 21 06:55:09 crc kubenswrapper[4913]: I0121 06:55:09.997898 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8"} err="failed to get container status \"11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8\": rpc error: code = NotFound desc = could not find container \"11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8\": container with ID starting with 11d7387da0d2857bec0800ecf4d27e544a1c9efca5021f5880468a4b755bd4a8 not found: ID does not exist" Jan 21 06:55:10 crc kubenswrapper[4913]: I0121 06:55:10.538552 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" path="/var/lib/kubelet/pods/1bcf0783-d151-4d4d-ad95-5671ec458c85/volumes" Jan 21 06:55:10 crc kubenswrapper[4913]: I0121 06:55:10.539560 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" path="/var/lib/kubelet/pods/dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c/volumes" Jan 21 06:55:10 crc kubenswrapper[4913]: I0121 06:55:10.540197 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edaae817-2cda-4274-bad0-53165cffa224" path="/var/lib/kubelet/pods/edaae817-2cda-4274-bad0-53165cffa224/volumes" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.020394 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv"] 
Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.020761 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" podUID="2eed1c9d-583b-4678-a6d4-25ede526deb2" containerName="manager" containerID="cri-o://e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2" gracePeriod=10 Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.312096 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-nvvrn"] Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.312253 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-nvvrn" podUID="aafa8ec9-8d47-454f-ade6-cc83939b040d" containerName="registry-server" containerID="cri-o://4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741" gracePeriod=30 Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.381652 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw"] Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.399802 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/34e4e076b11e40e2796f19ad3bfdac5929942b93224fbc520400e0a069qvqsw"] Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.546486 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.662986 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69pvm\" (UniqueName: \"kubernetes.io/projected/2eed1c9d-583b-4678-a6d4-25ede526deb2-kube-api-access-69pvm\") pod \"2eed1c9d-583b-4678-a6d4-25ede526deb2\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.663052 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-apiservice-cert\") pod \"2eed1c9d-583b-4678-a6d4-25ede526deb2\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.663115 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-webhook-cert\") pod \"2eed1c9d-583b-4678-a6d4-25ede526deb2\" (UID: \"2eed1c9d-583b-4678-a6d4-25ede526deb2\") " Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.672820 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eed1c9d-583b-4678-a6d4-25ede526deb2-kube-api-access-69pvm" (OuterVolumeSpecName: "kube-api-access-69pvm") pod "2eed1c9d-583b-4678-a6d4-25ede526deb2" (UID: "2eed1c9d-583b-4678-a6d4-25ede526deb2"). InnerVolumeSpecName "kube-api-access-69pvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.675630 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2eed1c9d-583b-4678-a6d4-25ede526deb2" (UID: "2eed1c9d-583b-4678-a6d4-25ede526deb2"). 
InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.675730 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "2eed1c9d-583b-4678-a6d4-25ede526deb2" (UID: "2eed1c9d-583b-4678-a6d4-25ede526deb2"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.707270 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.764704 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5nnr\" (UniqueName: \"kubernetes.io/projected/aafa8ec9-8d47-454f-ade6-cc83939b040d-kube-api-access-d5nnr\") pod \"aafa8ec9-8d47-454f-ade6-cc83939b040d\" (UID: \"aafa8ec9-8d47-454f-ade6-cc83939b040d\") " Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.765042 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69pvm\" (UniqueName: \"kubernetes.io/projected/2eed1c9d-583b-4678-a6d4-25ede526deb2-kube-api-access-69pvm\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.765069 4913 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.765081 4913 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2eed1c9d-583b-4678-a6d4-25ede526deb2-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.767724 4913 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aafa8ec9-8d47-454f-ade6-cc83939b040d-kube-api-access-d5nnr" (OuterVolumeSpecName: "kube-api-access-d5nnr") pod "aafa8ec9-8d47-454f-ade6-cc83939b040d" (UID: "aafa8ec9-8d47-454f-ade6-cc83939b040d"). InnerVolumeSpecName "kube-api-access-d5nnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.866252 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5nnr\" (UniqueName: \"kubernetes.io/projected/aafa8ec9-8d47-454f-ade6-cc83939b040d-kube-api-access-d5nnr\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.960220 4913 generic.go:334] "Generic (PLEG): container finished" podID="2eed1c9d-583b-4678-a6d4-25ede526deb2" containerID="e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2" exitCode=0 Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.960299 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.960302 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" event={"ID":"2eed1c9d-583b-4678-a6d4-25ede526deb2","Type":"ContainerDied","Data":"e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2"} Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.960408 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv" event={"ID":"2eed1c9d-583b-4678-a6d4-25ede526deb2","Type":"ContainerDied","Data":"5f11053bf6e8005edf5c878b1053cb5b2f458f735b16ba02d777871ab59cfd24"} Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.960431 4913 scope.go:117] "RemoveContainer" containerID="e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.962525 4913 generic.go:334] "Generic (PLEG): container finished" podID="aafa8ec9-8d47-454f-ade6-cc83939b040d" containerID="4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741" exitCode=0 Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.962574 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-nvvrn" event={"ID":"aafa8ec9-8d47-454f-ade6-cc83939b040d","Type":"ContainerDied","Data":"4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741"} Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.962620 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-nvvrn" event={"ID":"aafa8ec9-8d47-454f-ade6-cc83939b040d","Type":"ContainerDied","Data":"13b2addf8c21bece7103dc74546b4b535b876e4503f539b9501595a1b88972a6"} Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.962638 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-nvvrn" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.979837 4913 scope.go:117] "RemoveContainer" containerID="e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2" Jan 21 06:55:11 crc kubenswrapper[4913]: E0121 06:55:11.980451 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2\": container with ID starting with e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2 not found: ID does not exist" containerID="e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.980502 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2"} err="failed to get container status \"e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2\": rpc error: code = NotFound desc = could not find container \"e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2\": container with ID starting with e5fc382a79a0ef953f273b991882a66929f329167f017a319a10bb15d3544fe2 not found: ID does not exist" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.980534 4913 scope.go:117] "RemoveContainer" containerID="4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741" Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.991900 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv"] Jan 21 06:55:11 crc kubenswrapper[4913]: I0121 06:55:11.996924 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-54fff4d8f9-xtknv"] Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.001894 4913 scope.go:117] "RemoveContainer" 
containerID="4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741" Jan 21 06:55:12 crc kubenswrapper[4913]: E0121 06:55:12.003929 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741\": container with ID starting with 4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741 not found: ID does not exist" containerID="4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741" Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.003973 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741"} err="failed to get container status \"4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741\": rpc error: code = NotFound desc = could not find container \"4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741\": container with ID starting with 4f06c0e1c66532687841df84cf88404c6cbe7438c83c057d01e077bf8b97d741 not found: ID does not exist" Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.008612 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-nvvrn"] Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.016691 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-nvvrn"] Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.374122 4913 prober.go:107] "Probe failed" probeType="Readiness" pod="cinder-kuttl-tests/rabbitmq-server-0" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.71:5672: i/o timeout" Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.534502 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eed1c9d-583b-4678-a6d4-25ede526deb2" 
path="/var/lib/kubelet/pods/2eed1c9d-583b-4678-a6d4-25ede526deb2/volumes" Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.534976 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aafa8ec9-8d47-454f-ade6-cc83939b040d" path="/var/lib/kubelet/pods/aafa8ec9-8d47-454f-ade6-cc83939b040d/volumes" Jan 21 06:55:12 crc kubenswrapper[4913]: I0121 06:55:12.535423 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3feb49b-10bf-4116-91b9-e9b726161892" path="/var/lib/kubelet/pods/e3feb49b-10bf-4116-91b9-e9b726161892/volumes" Jan 21 06:55:13 crc kubenswrapper[4913]: I0121 06:55:13.721076 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"] Jan 21 06:55:13 crc kubenswrapper[4913]: I0121 06:55:13.721638 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" podUID="f401b62e-8ebd-413e-a383-d9e74626c3d4" containerName="operator" containerID="cri-o://3e8408a1f2b92dbab63b868892a8ba456a45462b5c6f6463c5d4225f6c7d6630" gracePeriod=10 Jan 21 06:55:13 crc kubenswrapper[4913]: I0121 06:55:13.981290 4913 generic.go:334] "Generic (PLEG): container finished" podID="f401b62e-8ebd-413e-a383-d9e74626c3d4" containerID="3e8408a1f2b92dbab63b868892a8ba456a45462b5c6f6463c5d4225f6c7d6630" exitCode=0 Jan 21 06:55:13 crc kubenswrapper[4913]: I0121 06:55:13.981339 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" event={"ID":"f401b62e-8ebd-413e-a383-d9e74626c3d4","Type":"ContainerDied","Data":"3e8408a1f2b92dbab63b868892a8ba456a45462b5c6f6463c5d4225f6c7d6630"} Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.002859 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5dtkj"] Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.003117 4913 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" podUID="d5725475-8b61-45a7-91e8-1d28e9042910" containerName="registry-server" containerID="cri-o://9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30" gracePeriod=30 Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.033848 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"] Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.041668 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590jxmch"] Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.237889 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.296018 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c6s9\" (UniqueName: \"kubernetes.io/projected/f401b62e-8ebd-413e-a383-d9e74626c3d4-kube-api-access-9c6s9\") pod \"f401b62e-8ebd-413e-a383-d9e74626c3d4\" (UID: \"f401b62e-8ebd-413e-a383-d9e74626c3d4\") " Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.303307 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f401b62e-8ebd-413e-a383-d9e74626c3d4-kube-api-access-9c6s9" (OuterVolumeSpecName: "kube-api-access-9c6s9") pod "f401b62e-8ebd-413e-a383-d9e74626c3d4" (UID: "f401b62e-8ebd-413e-a383-d9e74626c3d4"). InnerVolumeSpecName "kube-api-access-9c6s9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.399850 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c6s9\" (UniqueName: \"kubernetes.io/projected/f401b62e-8ebd-413e-a383-d9e74626c3d4-kube-api-access-9c6s9\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.412607 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.501011 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltk68\" (UniqueName: \"kubernetes.io/projected/d5725475-8b61-45a7-91e8-1d28e9042910-kube-api-access-ltk68\") pod \"d5725475-8b61-45a7-91e8-1d28e9042910\" (UID: \"d5725475-8b61-45a7-91e8-1d28e9042910\") " Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.503853 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5725475-8b61-45a7-91e8-1d28e9042910-kube-api-access-ltk68" (OuterVolumeSpecName: "kube-api-access-ltk68") pod "d5725475-8b61-45a7-91e8-1d28e9042910" (UID: "d5725475-8b61-45a7-91e8-1d28e9042910"). InnerVolumeSpecName "kube-api-access-ltk68". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.534381 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8829e790-ce91-4ccb-8b7b-955b5d4cc3ff" path="/var/lib/kubelet/pods/8829e790-ce91-4ccb-8b7b-955b5d4cc3ff/volumes" Jan 21 06:55:14 crc kubenswrapper[4913]: I0121 06:55:14.602743 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltk68\" (UniqueName: \"kubernetes.io/projected/d5725475-8b61-45a7-91e8-1d28e9042910-kube-api-access-ltk68\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.003459 4913 generic.go:334] "Generic (PLEG): container finished" podID="d5725475-8b61-45a7-91e8-1d28e9042910" containerID="9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30" exitCode=0 Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.003548 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.003634 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" event={"ID":"d5725475-8b61-45a7-91e8-1d28e9042910","Type":"ContainerDied","Data":"9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30"} Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.003720 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-5dtkj" event={"ID":"d5725475-8b61-45a7-91e8-1d28e9042910","Type":"ContainerDied","Data":"f8fbd38ff1590a71df6d9f315408484aabe627daa085b88b69fdaab05a28c092"} Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.003747 4913 scope.go:117] "RemoveContainer" containerID="9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.005803 4913 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" event={"ID":"f401b62e-8ebd-413e-a383-d9e74626c3d4","Type":"ContainerDied","Data":"35b959c40aff587948c5fd74b98b898c0bc76e951ec34079c1bec3b80111a1d1"} Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.005865 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.036919 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"] Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.036965 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-d8cz5"] Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.048155 4913 scope.go:117] "RemoveContainer" containerID="9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30" Jan 21 06:55:15 crc kubenswrapper[4913]: E0121 06:55:15.048735 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30\": container with ID starting with 9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30 not found: ID does not exist" containerID="9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.048803 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30"} err="failed to get container status \"9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30\": rpc error: code = NotFound desc = could not find container \"9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30\": container with ID starting with 
9a8c54797956db1a12fff6730fe2ff3f719b6288e1eb7c5bacb7068e3a55eb30 not found: ID does not exist" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.048842 4913 scope.go:117] "RemoveContainer" containerID="3e8408a1f2b92dbab63b868892a8ba456a45462b5c6f6463c5d4225f6c7d6630" Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.050977 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5dtkj"] Jan 21 06:55:15 crc kubenswrapper[4913]: I0121 06:55:15.061688 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-5dtkj"] Jan 21 06:55:16 crc kubenswrapper[4913]: I0121 06:55:16.536882 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5725475-8b61-45a7-91e8-1d28e9042910" path="/var/lib/kubelet/pods/d5725475-8b61-45a7-91e8-1d28e9042910/volumes" Jan 21 06:55:16 crc kubenswrapper[4913]: I0121 06:55:16.538418 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f401b62e-8ebd-413e-a383-d9e74626c3d4" path="/var/lib/kubelet/pods/f401b62e-8ebd-413e-a383-d9e74626c3d4/volumes" Jan 21 06:55:18 crc kubenswrapper[4913]: I0121 06:55:18.501019 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"] Jan 21 06:55:18 crc kubenswrapper[4913]: I0121 06:55:18.501503 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" podUID="0400ab56-f59a-4483-83d7-56db6e482138" containerName="manager" containerID="cri-o://8bc41a6fd10a55de8137ca15ea9ded0860bb2e67069e5ae93f7bf376b260b29f" gracePeriod=10 Jan 21 06:55:18 crc kubenswrapper[4913]: I0121 06:55:18.781455 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-9rr22"] Jan 21 06:55:18 crc kubenswrapper[4913]: I0121 06:55:18.781950 4913 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack-operators/infra-operator-index-9rr22" podUID="c40a34d4-0ef1-4aff-bc37-87c27e191d1f" containerName="registry-server" containerID="cri-o://2e10db8fdda3ab4fed4aafd0a7ed0a0ac45d6a1e90610404aa5d6acc88641f0f" gracePeriod=30 Jan 21 06:55:18 crc kubenswrapper[4913]: I0121 06:55:18.823264 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6"] Jan 21 06:55:18 crc kubenswrapper[4913]: I0121 06:55:18.827894 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/ea82580bc5724477f94b47db468c840840d4aaf95efc52f7d04b6353c12qmw6"] Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.045703 4913 generic.go:334] "Generic (PLEG): container finished" podID="c40a34d4-0ef1-4aff-bc37-87c27e191d1f" containerID="2e10db8fdda3ab4fed4aafd0a7ed0a0ac45d6a1e90610404aa5d6acc88641f0f" exitCode=0 Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.045780 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9rr22" event={"ID":"c40a34d4-0ef1-4aff-bc37-87c27e191d1f","Type":"ContainerDied","Data":"2e10db8fdda3ab4fed4aafd0a7ed0a0ac45d6a1e90610404aa5d6acc88641f0f"} Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.047827 4913 generic.go:334] "Generic (PLEG): container finished" podID="0400ab56-f59a-4483-83d7-56db6e482138" containerID="8bc41a6fd10a55de8137ca15ea9ded0860bb2e67069e5ae93f7bf376b260b29f" exitCode=0 Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.047857 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" event={"ID":"0400ab56-f59a-4483-83d7-56db6e482138","Type":"ContainerDied","Data":"8bc41a6fd10a55de8137ca15ea9ded0860bb2e67069e5ae93f7bf376b260b29f"} Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.466867 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.572503 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2srg\" (UniqueName: \"kubernetes.io/projected/0400ab56-f59a-4483-83d7-56db6e482138-kube-api-access-c2srg\") pod \"0400ab56-f59a-4483-83d7-56db6e482138\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.572568 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-apiservice-cert\") pod \"0400ab56-f59a-4483-83d7-56db6e482138\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.572621 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-webhook-cert\") pod \"0400ab56-f59a-4483-83d7-56db6e482138\" (UID: \"0400ab56-f59a-4483-83d7-56db6e482138\") " Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.577764 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0400ab56-f59a-4483-83d7-56db6e482138" (UID: "0400ab56-f59a-4483-83d7-56db6e482138"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.578809 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0400ab56-f59a-4483-83d7-56db6e482138-kube-api-access-c2srg" (OuterVolumeSpecName: "kube-api-access-c2srg") pod "0400ab56-f59a-4483-83d7-56db6e482138" (UID: "0400ab56-f59a-4483-83d7-56db6e482138"). 
InnerVolumeSpecName "kube-api-access-c2srg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.581740 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "0400ab56-f59a-4483-83d7-56db6e482138" (UID: "0400ab56-f59a-4483-83d7-56db6e482138"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.676232 4913 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.676515 4913 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0400ab56-f59a-4483-83d7-56db6e482138-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.676524 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2srg\" (UniqueName: \"kubernetes.io/projected/0400ab56-f59a-4483-83d7-56db6e482138-kube-api-access-c2srg\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.722448 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.777764 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwvxq\" (UniqueName: \"kubernetes.io/projected/c40a34d4-0ef1-4aff-bc37-87c27e191d1f-kube-api-access-dwvxq\") pod \"c40a34d4-0ef1-4aff-bc37-87c27e191d1f\" (UID: \"c40a34d4-0ef1-4aff-bc37-87c27e191d1f\") " Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.782248 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40a34d4-0ef1-4aff-bc37-87c27e191d1f-kube-api-access-dwvxq" (OuterVolumeSpecName: "kube-api-access-dwvxq") pod "c40a34d4-0ef1-4aff-bc37-87c27e191d1f" (UID: "c40a34d4-0ef1-4aff-bc37-87c27e191d1f"). InnerVolumeSpecName "kube-api-access-dwvxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:19 crc kubenswrapper[4913]: I0121 06:55:19.879906 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwvxq\" (UniqueName: \"kubernetes.io/projected/c40a34d4-0ef1-4aff-bc37-87c27e191d1f-kube-api-access-dwvxq\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.055545 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9rr22" event={"ID":"c40a34d4-0ef1-4aff-bc37-87c27e191d1f","Type":"ContainerDied","Data":"cbc301953611f343a8ff50e4c3bf31b93f4bad200d2da6f98a2d22ff700514b5"} Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.055615 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-9rr22" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.055669 4913 scope.go:117] "RemoveContainer" containerID="2e10db8fdda3ab4fed4aafd0a7ed0a0ac45d6a1e90610404aa5d6acc88641f0f" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.058121 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" event={"ID":"0400ab56-f59a-4483-83d7-56db6e482138","Type":"ContainerDied","Data":"34dad13f69ec8ebac45464947f13649925c0f206b2f50748da475e0fdda03067"} Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.058200 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.072933 4913 scope.go:117] "RemoveContainer" containerID="8bc41a6fd10a55de8137ca15ea9ded0860bb2e67069e5ae93f7bf376b260b29f" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.093842 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-9rr22"] Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.098164 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-9rr22"] Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.107054 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"] Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.110386 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54c44d6596-mtzxf"] Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.414906 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"] Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.415116 
4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" podUID="463ce3c4-98b5-41f1-bf36-f271228094e5" containerName="manager" containerID="cri-o://c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63" gracePeriod=10 Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.542459 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0400ab56-f59a-4483-83d7-56db6e482138" path="/var/lib/kubelet/pods/0400ab56-f59a-4483-83d7-56db6e482138/volumes" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.543117 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40a34d4-0ef1-4aff-bc37-87c27e191d1f" path="/var/lib/kubelet/pods/c40a34d4-0ef1-4aff-bc37-87c27e191d1f/volumes" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.543761 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9a93fdf-fffb-4344-8ac8-81d8be41eea7" path="/var/lib/kubelet/pods/f9a93fdf-fffb-4344-8ac8-81d8be41eea7/volumes" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.714871 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-jqn8q"] Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.715056 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-jqn8q" podUID="211a0853-fb6a-4002-98be-aa01c99eaa7d" containerName="registry-server" containerID="cri-o://08c9124a25a046b9932dc92016fc3cd4ef993ec44a4675b0ed9121feef7e0118" gracePeriod=30 Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.741884 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb"] Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.757102 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack-operators/a53044f29d0f89c197912873b7cc34569484f5de61ee55394f4939720b278gb"] Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.859973 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.893182 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-webhook-cert\") pod \"463ce3c4-98b5-41f1-bf36-f271228094e5\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.893279 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-apiservice-cert\") pod \"463ce3c4-98b5-41f1-bf36-f271228094e5\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.893318 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nr28\" (UniqueName: \"kubernetes.io/projected/463ce3c4-98b5-41f1-bf36-f271228094e5-kube-api-access-9nr28\") pod \"463ce3c4-98b5-41f1-bf36-f271228094e5\" (UID: \"463ce3c4-98b5-41f1-bf36-f271228094e5\") " Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.897409 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "463ce3c4-98b5-41f1-bf36-f271228094e5" (UID: "463ce3c4-98b5-41f1-bf36-f271228094e5"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.898064 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "463ce3c4-98b5-41f1-bf36-f271228094e5" (UID: "463ce3c4-98b5-41f1-bf36-f271228094e5"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.910738 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/463ce3c4-98b5-41f1-bf36-f271228094e5-kube-api-access-9nr28" (OuterVolumeSpecName: "kube-api-access-9nr28") pod "463ce3c4-98b5-41f1-bf36-f271228094e5" (UID: "463ce3c4-98b5-41f1-bf36-f271228094e5"). InnerVolumeSpecName "kube-api-access-9nr28". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.995212 4913 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.995243 4913 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/463ce3c4-98b5-41f1-bf36-f271228094e5-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:20 crc kubenswrapper[4913]: I0121 06:55:20.995255 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nr28\" (UniqueName: \"kubernetes.io/projected/463ce3c4-98b5-41f1-bf36-f271228094e5-kube-api-access-9nr28\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.067626 4913 generic.go:334] "Generic (PLEG): container finished" podID="211a0853-fb6a-4002-98be-aa01c99eaa7d" 
containerID="08c9124a25a046b9932dc92016fc3cd4ef993ec44a4675b0ed9121feef7e0118" exitCode=0 Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.067685 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jqn8q" event={"ID":"211a0853-fb6a-4002-98be-aa01c99eaa7d","Type":"ContainerDied","Data":"08c9124a25a046b9932dc92016fc3cd4ef993ec44a4675b0ed9121feef7e0118"} Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.069237 4913 generic.go:334] "Generic (PLEG): container finished" podID="463ce3c4-98b5-41f1-bf36-f271228094e5" containerID="c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63" exitCode=0 Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.069283 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" event={"ID":"463ce3c4-98b5-41f1-bf36-f271228094e5","Type":"ContainerDied","Data":"c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63"} Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.069298 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" event={"ID":"463ce3c4-98b5-41f1-bf36-f271228094e5","Type":"ContainerDied","Data":"cdb4e0359c30333e0f60a3a2aacd5b8acc1752dfe1e6b054f476c25f2495d9b4"} Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.069317 4913 scope.go:117] "RemoveContainer" containerID="c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63" Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.069310 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph" Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.104485 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.105415 4913 scope.go:117] "RemoveContainer" containerID="c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63" Jan 21 06:55:21 crc kubenswrapper[4913]: E0121 06:55:21.105766 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63\": container with ID starting with c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63 not found: ID does not exist" containerID="c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63" Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.105803 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63"} err="failed to get container status \"c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63\": rpc error: code = NotFound desc = could not find container \"c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63\": container with ID starting with c6715a75285718b94b5df476723952c05bf8d0f14986ef318cca917387992b63 not found: ID does not exist" Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.116896 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"] Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.126435 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-844d49f546-dqkph"] Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.197768 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rhrg\" (UniqueName: \"kubernetes.io/projected/211a0853-fb6a-4002-98be-aa01c99eaa7d-kube-api-access-4rhrg\") pod 
\"211a0853-fb6a-4002-98be-aa01c99eaa7d\" (UID: \"211a0853-fb6a-4002-98be-aa01c99eaa7d\") " Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.202511 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211a0853-fb6a-4002-98be-aa01c99eaa7d-kube-api-access-4rhrg" (OuterVolumeSpecName: "kube-api-access-4rhrg") pod "211a0853-fb6a-4002-98be-aa01c99eaa7d" (UID: "211a0853-fb6a-4002-98be-aa01c99eaa7d"). InnerVolumeSpecName "kube-api-access-4rhrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 06:55:21 crc kubenswrapper[4913]: I0121 06:55:21.299557 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rhrg\" (UniqueName: \"kubernetes.io/projected/211a0853-fb6a-4002-98be-aa01c99eaa7d-kube-api-access-4rhrg\") on node \"crc\" DevicePath \"\"" Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.081582 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-jqn8q" event={"ID":"211a0853-fb6a-4002-98be-aa01c99eaa7d","Type":"ContainerDied","Data":"db0a46a8fe44c42a9419cf6ef2e55a5438175fdc2e025cff0d77a8ceb555851b"} Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.081692 4913 scope.go:117] "RemoveContainer" containerID="08c9124a25a046b9932dc92016fc3cd4ef993ec44a4675b0ed9121feef7e0118" Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.081763 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-jqn8q" Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.121635 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-jqn8q"] Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.124392 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-jqn8q"] Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.539646 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="211a0853-fb6a-4002-98be-aa01c99eaa7d" path="/var/lib/kubelet/pods/211a0853-fb6a-4002-98be-aa01c99eaa7d/volumes" Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.540903 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="463ce3c4-98b5-41f1-bf36-f271228094e5" path="/var/lib/kubelet/pods/463ce3c4-98b5-41f1-bf36-f271228094e5/volumes" Jan 21 06:55:22 crc kubenswrapper[4913]: I0121 06:55:22.542126 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="980a7b2a-b9d1-4935-ac4c-9ac4a4730138" path="/var/lib/kubelet/pods/980a7b2a-b9d1-4935-ac4c-9ac4a4730138/volumes" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.249550 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ffklg/must-gather-ftkd9"] Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250756 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40a34d4-0ef1-4aff-bc37-87c27e191d1f" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250773 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40a34d4-0ef1-4aff-bc37-87c27e191d1f" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250784 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerName="mariadb-account-delete" Jan 21 06:55:36 crc kubenswrapper[4913]: 
I0121 06:55:36.250791 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerName="mariadb-account-delete" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250805 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="rabbitmq" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250814 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="rabbitmq" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250822 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463ce3c4-98b5-41f1-bf36-f271228094e5" containerName="manager" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250828 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="463ce3c4-98b5-41f1-bf36-f271228094e5" containerName="manager" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250839 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aafa8ec9-8d47-454f-ade6-cc83939b040d" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250845 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="aafa8ec9-8d47-454f-ade6-cc83939b040d" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250855 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211a0853-fb6a-4002-98be-aa01c99eaa7d" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250862 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="211a0853-fb6a-4002-98be-aa01c99eaa7d" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250871 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0400ab56-f59a-4483-83d7-56db6e482138" containerName="manager" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250877 4913 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0400ab56-f59a-4483-83d7-56db6e482138" containerName="manager" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250887 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f401b62e-8ebd-413e-a383-d9e74626c3d4" containerName="operator" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250893 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="f401b62e-8ebd-413e-a383-d9e74626c3d4" containerName="operator" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250901 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eed1c9d-583b-4678-a6d4-25ede526deb2" containerName="manager" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250907 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eed1c9d-583b-4678-a6d4-25ede526deb2" containerName="manager" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250915 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="galera" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250921 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="galera" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250929 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="mysql-bootstrap" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250935 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="mysql-bootstrap" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250944 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="setup-container" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250950 4913 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="setup-container" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250957 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde82b66-4c57-4f59-839e-5ccb89d18944" containerName="keystone-api" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250965 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde82b66-4c57-4f59-839e-5ccb89d18944" containerName="keystone-api" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250974 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5725475-8b61-45a7-91e8-1d28e9042910" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.250982 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5725475-8b61-45a7-91e8-1d28e9042910" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.250996 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerName="mysql-bootstrap" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251004 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerName="mysql-bootstrap" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251012 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac820b36-83fb-44ca-97b0-6181846a5ef3" containerName="memcached" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251018 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac820b36-83fb-44ca-97b0-6181846a5ef3" containerName="memcached" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251026 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f61c697-fbcc-4e33-929b-03eacd477d73" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251033 4913 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4f61c697-fbcc-4e33-929b-03eacd477d73" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251040 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" containerName="manager" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251046 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" containerName="manager" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251055 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="mysql-bootstrap" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251061 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="mysql-bootstrap" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251075 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerName="galera" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251082 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerName="galera" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251091 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="galera" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251097 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="galera" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251199 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eed1c9d-583b-4678-a6d4-25ede526deb2" containerName="manager" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251212 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" 
containerName="mariadb-account-delete" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251221 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="463ce3c4-98b5-41f1-bf36-f271228094e5" containerName="manager" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251227 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="edaae817-2cda-4274-bad0-53165cffa224" containerName="galera" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251236 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerName="mariadb-account-delete" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251244 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedf2cc4-5f64-40c5-83da-cf1e0cfebf6c" containerName="galera" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251253 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a73c5aa-0503-4a97-918b-f7f81ec4bc4d" containerName="manager" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251262 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="211a0853-fb6a-4002-98be-aa01c99eaa7d" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251269 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40a34d4-0ef1-4aff-bc37-87c27e191d1f" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251275 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f61c697-fbcc-4e33-929b-03eacd477d73" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251286 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5725475-8b61-45a7-91e8-1d28e9042910" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251292 4913 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="aafa8ec9-8d47-454f-ade6-cc83939b040d" containerName="registry-server" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251299 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b1fd9b-951d-4d8e-8a08-4a2e8d820370" containerName="galera" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251308 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="0400ab56-f59a-4483-83d7-56db6e482138" containerName="manager" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251316 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="f401b62e-8ebd-413e-a383-d9e74626c3d4" containerName="operator" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251322 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b3d506-1b7a-4e74-8e75-bd5ad371a3e7" containerName="rabbitmq" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251330 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac820b36-83fb-44ca-97b0-6181846a5ef3" containerName="memcached" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251338 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde82b66-4c57-4f59-839e-5ccb89d18944" containerName="keystone-api" Jan 21 06:55:36 crc kubenswrapper[4913]: E0121 06:55:36.251442 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerName="mariadb-account-delete" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.251457 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcf0783-d151-4d4d-ad95-5671ec458c85" containerName="mariadb-account-delete" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.252139 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ffklg/must-gather-ftkd9" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.255873 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ffklg"/"openshift-service-ca.crt" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.255877 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ffklg"/"default-dockercfg-wl5zr" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.258745 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ffklg"/"kube-root-ca.crt" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.308793 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ffklg/must-gather-ftkd9"] Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.317615 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0860e960-b47d-4cea-9c37-d28691c9a4d9-must-gather-output\") pod \"must-gather-ftkd9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " pod="openshift-must-gather-ffklg/must-gather-ftkd9" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.317688 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc58r\" (UniqueName: \"kubernetes.io/projected/0860e960-b47d-4cea-9c37-d28691c9a4d9-kube-api-access-tc58r\") pod \"must-gather-ftkd9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " pod="openshift-must-gather-ffklg/must-gather-ftkd9" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.418535 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc58r\" (UniqueName: \"kubernetes.io/projected/0860e960-b47d-4cea-9c37-d28691c9a4d9-kube-api-access-tc58r\") pod \"must-gather-ftkd9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " 
pod="openshift-must-gather-ffklg/must-gather-ftkd9" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.418615 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0860e960-b47d-4cea-9c37-d28691c9a4d9-must-gather-output\") pod \"must-gather-ftkd9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " pod="openshift-must-gather-ffklg/must-gather-ftkd9" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.418964 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0860e960-b47d-4cea-9c37-d28691c9a4d9-must-gather-output\") pod \"must-gather-ftkd9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " pod="openshift-must-gather-ffklg/must-gather-ftkd9" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.442963 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc58r\" (UniqueName: \"kubernetes.io/projected/0860e960-b47d-4cea-9c37-d28691c9a4d9-kube-api-access-tc58r\") pod \"must-gather-ftkd9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " pod="openshift-must-gather-ffklg/must-gather-ftkd9" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.571978 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ffklg/must-gather-ftkd9" Jan 21 06:55:36 crc kubenswrapper[4913]: I0121 06:55:36.820489 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ffklg/must-gather-ftkd9"] Jan 21 06:55:36 crc kubenswrapper[4913]: W0121 06:55:36.834788 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0860e960_b47d_4cea_9c37_d28691c9a4d9.slice/crio-63ad5e0bcf2029c379f5702ad72f1ec3148f61b76e7031734c9ec666610b43a3 WatchSource:0}: Error finding container 63ad5e0bcf2029c379f5702ad72f1ec3148f61b76e7031734c9ec666610b43a3: Status 404 returned error can't find the container with id 63ad5e0bcf2029c379f5702ad72f1ec3148f61b76e7031734c9ec666610b43a3 Jan 21 06:55:37 crc kubenswrapper[4913]: I0121 06:55:37.217279 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ffklg/must-gather-ftkd9" event={"ID":"0860e960-b47d-4cea-9c37-d28691c9a4d9","Type":"ContainerStarted","Data":"63ad5e0bcf2029c379f5702ad72f1ec3148f61b76e7031734c9ec666610b43a3"} Jan 21 06:55:43 crc kubenswrapper[4913]: I0121 06:55:43.259285 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ffklg/must-gather-ftkd9" event={"ID":"0860e960-b47d-4cea-9c37-d28691c9a4d9","Type":"ContainerStarted","Data":"db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c"} Jan 21 06:55:43 crc kubenswrapper[4913]: I0121 06:55:43.259890 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ffklg/must-gather-ftkd9" event={"ID":"0860e960-b47d-4cea-9c37-d28691c9a4d9","Type":"ContainerStarted","Data":"7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60"} Jan 21 06:55:43 crc kubenswrapper[4913]: I0121 06:55:43.294951 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ffklg/must-gather-ftkd9" podStartSLOduration=1.6540018330000001 
podStartE2EDuration="7.294927069s" podCreationTimestamp="2026-01-21 06:55:36 +0000 UTC" firstStartedPulling="2026-01-21 06:55:36.836111874 +0000 UTC m=+1226.632471547" lastFinishedPulling="2026-01-21 06:55:42.47703711 +0000 UTC m=+1232.273396783" observedRunningTime="2026-01-21 06:55:43.278013332 +0000 UTC m=+1233.074373025" watchObservedRunningTime="2026-01-21 06:55:43.294927069 +0000 UTC m=+1233.091286752" Jan 21 06:55:51 crc kubenswrapper[4913]: I0121 06:55:51.591043 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/controller/0.log" Jan 21 06:55:51 crc kubenswrapper[4913]: I0121 06:55:51.605098 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/kube-rbac-proxy/0.log" Jan 21 06:55:51 crc kubenswrapper[4913]: I0121 06:55:51.612089 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mnxc8_a33768cf-18ec-4cec-94fb-303b0779eb59/frr-k8s-webhook-server/0.log" Jan 21 06:55:51 crc kubenswrapper[4913]: I0121 06:55:51.633361 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/controller/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.125952 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.137652 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/reloader/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.143319 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr-metrics/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 
06:55:52.151646 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.159893 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy-frr/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.169104 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-frr-files/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.176045 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-reloader/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.182475 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-metrics/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.213519 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fc59f99b-jzt7r_e89d9462-a010-4873-9a7a-ff85114b35f9/manager/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.222536 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6879b6b49c-65nv9_09278577-df56-4906-b822-79df291100ae/webhook-server/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.377245 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/speaker/0.log" Jan 21 06:55:52 crc kubenswrapper[4913]: I0121 06:55:52.383074 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/kube-rbac-proxy/0.log" Jan 21 06:55:56 crc kubenswrapper[4913]: E0121 
06:55:56.555077 4913 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.029s" Jan 21 06:56:00 crc kubenswrapper[4913]: I0121 06:56:00.101510 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5gzt8_6cdf7744-1629-46a4-b176-0fc75c149a95/control-plane-machine-set-operator/0.log" Jan 21 06:56:00 crc kubenswrapper[4913]: I0121 06:56:00.118808 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/kube-rbac-proxy/0.log" Jan 21 06:56:00 crc kubenswrapper[4913]: I0121 06:56:00.124523 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/machine-api-operator/0.log" Jan 21 06:56:08 crc kubenswrapper[4913]: I0121 06:56:08.319543 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:56:08 crc kubenswrapper[4913]: I0121 06:56:08.320271 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:56:10 crc kubenswrapper[4913]: I0121 06:56:10.750969 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/controller/0.log" Jan 21 06:56:10 crc kubenswrapper[4913]: I0121 06:56:10.757227 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/kube-rbac-proxy/0.log" Jan 21 06:56:10 crc kubenswrapper[4913]: I0121 06:56:10.769883 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mnxc8_a33768cf-18ec-4cec-94fb-303b0779eb59/frr-k8s-webhook-server/0.log" Jan 21 06:56:10 crc kubenswrapper[4913]: I0121 06:56:10.798583 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/controller/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.230636 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.242869 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/reloader/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.246992 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr-metrics/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.255640 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.261270 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy-frr/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.267762 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-frr-files/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.274249 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-reloader/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.281955 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-metrics/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.306779 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fc59f99b-jzt7r_e89d9462-a010-4873-9a7a-ff85114b35f9/manager/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.316332 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6879b6b49c-65nv9_09278577-df56-4906-b822-79df291100ae/webhook-server/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.426764 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/speaker/0.log" Jan 21 06:56:11 crc kubenswrapper[4913]: I0121 06:56:11.436420 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/kube-rbac-proxy/0.log" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.592236 4913 scope.go:117] "RemoveContainer" containerID="e2875581fbd572dea4f4e410e08bce794cd12bf464303d41fbc9d66b0d7fcef6" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.619617 4913 scope.go:117] "RemoveContainer" containerID="7ea6d30dbbc206edb2f162346b01f1b70cea4ff52c09855b5688ceae555cd86f" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.640979 4913 scope.go:117] "RemoveContainer" containerID="ab7eba0415a79bbb3100d97d9966a99002b3c45fe402ca2d92dfeca4328093d3" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.661664 4913 scope.go:117] "RemoveContainer" containerID="37009c48c11ee62bd23237579f9cc9c8d427c5cbaddb700f28802586ebc40376" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 
06:56:12.683430 4913 scope.go:117] "RemoveContainer" containerID="436316e77fad673adee43600b81c8e8cb659f723e40fde5ac692b7f2f5e51c80" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.703213 4913 scope.go:117] "RemoveContainer" containerID="e07664685c33ef4c385016551f2a546716097259da2d998a774abfcc7e395a11" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.720882 4913 scope.go:117] "RemoveContainer" containerID="660368d7d30a6dcd15b89683468c16579bae9e6ba5e62cde1ef85f9aba8de9d8" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.743258 4913 scope.go:117] "RemoveContainer" containerID="78cb1589a905d5dddd50883207b015f6195746217f5f18c55b7dfc51421182fe" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.760019 4913 scope.go:117] "RemoveContainer" containerID="ad4e9a5725df8589b88531ce1b43bf4f6ce13685f4a05de5d6349c46dcad6a4a" Jan 21 06:56:12 crc kubenswrapper[4913]: I0121 06:56:12.775076 4913 scope.go:117] "RemoveContainer" containerID="d9191512905a50023a8bd3340913a6390b0e97c743493bde552499fe3bccd78f" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.208503 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_6bd2ad61-8bab-42d9-a09c-cf48255cc25c/extract/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.221020 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_6bd2ad61-8bab-42d9-a09c-cf48255cc25c/util/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.229283 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_6bd2ad61-8bab-42d9-a09c-cf48255cc25c/pull/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.500885 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-rp8wd_a8ba24ca-c946-4684-817a-0ae5bada3ecd/registry-server/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.506050 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rp8wd_a8ba24ca-c946-4684-817a-0ae5bada3ecd/extract-utilities/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.513737 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rp8wd_a8ba24ca-c946-4684-817a-0ae5bada3ecd/extract-content/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.744370 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tmk45_e81adc58-27d6-4087-9902-6e61aba9bfaa/registry-server/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.749074 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tmk45_e81adc58-27d6-4087-9902-6e61aba9bfaa/extract-utilities/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.754995 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tmk45_e81adc58-27d6-4087-9902-6e61aba9bfaa/extract-content/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.768573 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mmmzm_9850b956-f0a1-4e29-b5c2-703b0aa7b697/marketplace-operator/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.823215 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-frpd4_b6d83360-7a65-47b3-98df-42902962da8d/registry-server/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.828033 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-frpd4_b6d83360-7a65-47b3-98df-42902962da8d/extract-utilities/0.log" Jan 21 06:56:21 crc kubenswrapper[4913]: I0121 06:56:21.837008 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-frpd4_b6d83360-7a65-47b3-98df-42902962da8d/extract-content/0.log" Jan 21 06:56:22 crc kubenswrapper[4913]: I0121 06:56:22.108175 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pp6lf_4fd9a0ea-0344-4e90-87f0-34a568804f80/registry-server/0.log" Jan 21 06:56:22 crc kubenswrapper[4913]: I0121 06:56:22.112820 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pp6lf_4fd9a0ea-0344-4e90-87f0-34a568804f80/extract-utilities/0.log" Jan 21 06:56:22 crc kubenswrapper[4913]: I0121 06:56:22.120952 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pp6lf_4fd9a0ea-0344-4e90-87f0-34a568804f80/extract-content/0.log" Jan 21 06:56:38 crc kubenswrapper[4913]: I0121 06:56:38.319347 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:56:38 crc kubenswrapper[4913]: I0121 06:56:38.320006 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:56:41 crc kubenswrapper[4913]: I0121 06:56:41.599794 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/controller/0.log" Jan 21 06:56:41 crc kubenswrapper[4913]: I0121 06:56:41.607140 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/kube-rbac-proxy/0.log" Jan 21 06:56:41 crc kubenswrapper[4913]: I0121 06:56:41.625333 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mnxc8_a33768cf-18ec-4cec-94fb-303b0779eb59/frr-k8s-webhook-server/0.log" Jan 21 06:56:41 crc kubenswrapper[4913]: I0121 06:56:41.642048 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/controller/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.113212 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.122624 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/reloader/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.127474 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr-metrics/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.136085 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.143376 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy-frr/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.149700 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-frr-files/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.156182 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-reloader/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.163907 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-metrics/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.193237 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fc59f99b-jzt7r_e89d9462-a010-4873-9a7a-ff85114b35f9/manager/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.201789 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6879b6b49c-65nv9_09278577-df56-4906-b822-79df291100ae/webhook-server/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.326305 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/speaker/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.334716 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/kube-rbac-proxy/0.log" Jan 21 06:56:42 crc kubenswrapper[4913]: I0121 06:56:42.993648 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5gzt8_6cdf7744-1629-46a4-b176-0fc75c149a95/control-plane-machine-set-operator/0.log" Jan 21 06:56:43 crc kubenswrapper[4913]: I0121 06:56:43.011348 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/kube-rbac-proxy/0.log" Jan 21 06:56:43 crc 
kubenswrapper[4913]: I0121 06:56:43.020561 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/machine-api-operator/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.035319 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/kube-multus-additional-cni-plugins/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.042084 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/egress-router-binary-copy/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.048862 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/cni-plugins/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.055807 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/bond-cni-plugin/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.063884 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/routeoverride-cni/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.072308 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/whereabouts-cni-bincopy/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.080964 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/whereabouts-cni/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 
06:56:44.097354 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-l6rtq_e70bbe19-3e5b-4629-b9bf-3c6fc8072836/multus-admission-controller/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.102566 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-l6rtq_e70bbe19-3e5b-4629-b9bf-3c6fc8072836/kube-rbac-proxy/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.157186 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/3.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.160284 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/2.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.192699 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-wfcsc_60ed8982-ee20-4330-861f-61509c39bbe7/network-metrics-daemon/0.log" Jan 21 06:56:44 crc kubenswrapper[4913]: I0121 06:56:44.198931 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-wfcsc_60ed8982-ee20-4330-861f-61509c39bbe7/kube-rbac-proxy/0.log" Jan 21 06:57:08 crc kubenswrapper[4913]: I0121 06:57:08.318730 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:57:08 crc kubenswrapper[4913]: I0121 06:57:08.319257 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:57:08 crc kubenswrapper[4913]: I0121 06:57:08.319311 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 06:57:08 crc kubenswrapper[4913]: I0121 06:57:08.320065 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a2c787182ddbb441e492633cc6e45e58f8a8e786c3e7cf757004c49b480a8800"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 06:57:08 crc kubenswrapper[4913]: I0121 06:57:08.320156 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://a2c787182ddbb441e492633cc6e45e58f8a8e786c3e7cf757004c49b480a8800" gracePeriod=600 Jan 21 06:57:09 crc kubenswrapper[4913]: I0121 06:57:09.057669 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="a2c787182ddbb441e492633cc6e45e58f8a8e786c3e7cf757004c49b480a8800" exitCode=0 Jan 21 06:57:09 crc kubenswrapper[4913]: I0121 06:57:09.057749 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"a2c787182ddbb441e492633cc6e45e58f8a8e786c3e7cf757004c49b480a8800"} Jan 21 06:57:09 crc kubenswrapper[4913]: I0121 06:57:09.058114 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" 
event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"} Jan 21 06:57:09 crc kubenswrapper[4913]: I0121 06:57:09.058140 4913 scope.go:117] "RemoveContainer" containerID="bc5cf18289e8129ab669a4c6ce772cd7b24630b3756f5dce3bd40297fec710a6" Jan 21 06:57:12 crc kubenswrapper[4913]: I0121 06:57:12.903841 4913 scope.go:117] "RemoveContainer" containerID="f64e19c7af4171a78023ca3711a7eec83f0f3b9547ff3c69e634b90c2c0582db" Jan 21 06:57:12 crc kubenswrapper[4913]: I0121 06:57:12.936812 4913 scope.go:117] "RemoveContainer" containerID="461bda799565e5924857f0b3e4f758b75acec0c9a9a9ac5312facf66ecd33abe" Jan 21 06:57:12 crc kubenswrapper[4913]: I0121 06:57:12.978050 4913 scope.go:117] "RemoveContainer" containerID="60ac37c77e23483afc0614ffbcd77f3112a7195bb5009179ec07fc76cbf42d75" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 06:58:13.037552 4913 scope.go:117] "RemoveContainer" containerID="65cd0cea93dd7b6363d627c73b949bbd4db992664864de3f7764764d0faf09c3" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 06:58:13.067053 4913 scope.go:117] "RemoveContainer" containerID="a22a190e1ff258d8fa0dac8b8437c0615341f76e3c06c6668ce4c7053be4e2a1" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 06:58:13.096046 4913 scope.go:117] "RemoveContainer" containerID="92a35170c3a228e725dc4577bc820bf64539f262c32a337b31f25d4c32fe9af7" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 06:58:13.118163 4913 scope.go:117] "RemoveContainer" containerID="b5c65ed731440220892793dc0f5f5c1250a99d03d67a71b6685779fcad076adc" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 06:58:13.137385 4913 scope.go:117] "RemoveContainer" containerID="e3dd37a383fdecec61ce46d8e11f1c1ebc9bad6811b9f26d39315ce8ccbe7680" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 06:58:13.156938 4913 scope.go:117] "RemoveContainer" containerID="57ae751f0ac8e317da709793a9908e104d2805bb250e026f326c599fd971bccb" Jan 21 06:58:13 crc kubenswrapper[4913]: I0121 
06:58:13.191086 4913 scope.go:117] "RemoveContainer" containerID="659f77de9cd51e636b4491f066c76f374e6bc4986d8367fb979a0e570225f47e" Jan 21 06:59:08 crc kubenswrapper[4913]: I0121 06:59:08.319124 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:59:08 crc kubenswrapper[4913]: I0121 06:59:08.320032 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 06:59:13 crc kubenswrapper[4913]: I0121 06:59:13.269458 4913 scope.go:117] "RemoveContainer" containerID="9e53798c55bdb9c3197d82fb273c6d283738061cb335b3befb8f5bcffda2529a" Jan 21 06:59:13 crc kubenswrapper[4913]: I0121 06:59:13.330662 4913 scope.go:117] "RemoveContainer" containerID="bd6ec813f35e1b00979db706282fff37dec10f9429ac91969695adf37e90c613" Jan 21 06:59:13 crc kubenswrapper[4913]: I0121 06:59:13.347400 4913 scope.go:117] "RemoveContainer" containerID="adff856e206c29a325cd94fb3c5dd663d301aeb9797e99aeea00e843d581cf79" Jan 21 06:59:38 crc kubenswrapper[4913]: I0121 06:59:38.318480 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 06:59:38 crc kubenswrapper[4913]: I0121 06:59:38.318967 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" 
podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.159909 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"] Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.162470 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.164218 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.168438 4913 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.180210 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"] Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.220669 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxpm4\" (UniqueName: \"kubernetes.io/projected/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-kube-api-access-pxpm4\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.220709 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-secret-volume\") pod \"collect-profiles-29482980-n79tj\" (UID: 
\"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.220769 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-config-volume\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.321817 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-config-volume\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.321896 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxpm4\" (UniqueName: \"kubernetes.io/projected/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-kube-api-access-pxpm4\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.321924 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-secret-volume\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.326913 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-config-volume\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.335582 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-secret-volume\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.338797 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxpm4\" (UniqueName: \"kubernetes.io/projected/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-kube-api-access-pxpm4\") pod \"collect-profiles-29482980-n79tj\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.511374 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" Jan 21 07:00:00 crc kubenswrapper[4913]: I0121 07:00:00.730277 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj"] Jan 21 07:00:00 crc kubenswrapper[4913]: W0121 07:00:00.741270 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a66fbcc_cc17_4d20_bb3a_36d3f9ad2a90.slice/crio-3184add7134d09e0b7cd14a0634ff21edf4511901b2314b25541f97537495de2 WatchSource:0}: Error finding container 3184add7134d09e0b7cd14a0634ff21edf4511901b2314b25541f97537495de2: Status 404 returned error can't find the container with id 3184add7134d09e0b7cd14a0634ff21edf4511901b2314b25541f97537495de2 Jan 21 07:00:01 crc kubenswrapper[4913]: I0121 07:00:01.370842 4913 generic.go:334] "Generic (PLEG): container finished" podID="9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" containerID="6ff2eb7824bb9caeb6a07d4f757d7b8e5c74d440d7ed3872dc95969b6f7c97b2" exitCode=0 Jan 21 07:00:01 crc kubenswrapper[4913]: I0121 07:00:01.370978 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" event={"ID":"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90","Type":"ContainerDied","Data":"6ff2eb7824bb9caeb6a07d4f757d7b8e5c74d440d7ed3872dc95969b6f7c97b2"} Jan 21 07:00:01 crc kubenswrapper[4913]: I0121 07:00:01.371174 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" event={"ID":"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90","Type":"ContainerStarted","Data":"3184add7134d09e0b7cd14a0634ff21edf4511901b2314b25541f97537495de2"} Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.656665 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.856800 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-config-volume\") pod \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.856893 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxpm4\" (UniqueName: \"kubernetes.io/projected/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-kube-api-access-pxpm4\") pod \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.856977 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-secret-volume\") pod \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\" (UID: \"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90\") " Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.858136 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-config-volume" (OuterVolumeSpecName: "config-volume") pod "9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" (UID: "9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.865681 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-kube-api-access-pxpm4" (OuterVolumeSpecName: "kube-api-access-pxpm4") pod "9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" (UID: "9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90"). 
InnerVolumeSpecName "kube-api-access-pxpm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.866031 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" (UID: "9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.958344 4913 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.958386 4913 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:02 crc kubenswrapper[4913]: I0121 07:00:02.958404 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxpm4\" (UniqueName: \"kubernetes.io/projected/9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90-kube-api-access-pxpm4\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:03 crc kubenswrapper[4913]: I0121 07:00:03.388956 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" event={"ID":"9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90","Type":"ContainerDied","Data":"3184add7134d09e0b7cd14a0634ff21edf4511901b2314b25541f97537495de2"} Jan 21 07:00:03 crc kubenswrapper[4913]: I0121 07:00:03.389012 4913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3184add7134d09e0b7cd14a0634ff21edf4511901b2314b25541f97537495de2" Jan 21 07:00:03 crc kubenswrapper[4913]: I0121 07:00:03.389066 4913 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29482980-n79tj" Jan 21 07:00:08 crc kubenswrapper[4913]: I0121 07:00:08.319713 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 07:00:08 crc kubenswrapper[4913]: I0121 07:00:08.319822 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 07:00:08 crc kubenswrapper[4913]: I0121 07:00:08.319892 4913 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" Jan 21 07:00:08 crc kubenswrapper[4913]: I0121 07:00:08.320752 4913 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"} pod="openshift-machine-config-operator/machine-config-daemon-sqswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 07:00:08 crc kubenswrapper[4913]: I0121 07:00:08.320882 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" containerID="cri-o://71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" gracePeriod=600 Jan 21 07:00:08 crc kubenswrapper[4913]: E0121 07:00:08.525237 4913 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:00:09 crc kubenswrapper[4913]: I0121 07:00:09.462779 4913 generic.go:334] "Generic (PLEG): container finished" podID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" exitCode=0 Jan 21 07:00:09 crc kubenswrapper[4913]: I0121 07:00:09.462830 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerDied","Data":"71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"} Jan 21 07:00:09 crc kubenswrapper[4913]: I0121 07:00:09.462989 4913 scope.go:117] "RemoveContainer" containerID="a2c787182ddbb441e492633cc6e45e58f8a8e786c3e7cf757004c49b480a8800" Jan 21 07:00:09 crc kubenswrapper[4913]: I0121 07:00:09.464115 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:00:09 crc kubenswrapper[4913]: E0121 07:00:09.464895 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:00:24 crc kubenswrapper[4913]: I0121 07:00:24.526799 4913 scope.go:117] "RemoveContainer" 
containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:00:24 crc kubenswrapper[4913]: E0121 07:00:24.528379 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:00:36 crc kubenswrapper[4913]: I0121 07:00:36.527438 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:00:36 crc kubenswrapper[4913]: E0121 07:00:36.528397 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.043537 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9757r"] Jan 21 07:00:38 crc kubenswrapper[4913]: E0121 07:00:38.043821 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" containerName="collect-profiles" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.043837 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" containerName="collect-profiles" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.044206 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a66fbcc-cc17-4d20-bb3a-36d3f9ad2a90" 
containerName="collect-profiles" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.047689 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.059247 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9757r"] Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.206599 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bxcl\" (UniqueName: \"kubernetes.io/projected/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-kube-api-access-5bxcl\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.206699 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-catalog-content\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.206778 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-utilities\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.307831 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-utilities\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " 
pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.307938 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bxcl\" (UniqueName: \"kubernetes.io/projected/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-kube-api-access-5bxcl\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.308004 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-catalog-content\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.308354 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-utilities\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.308685 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-catalog-content\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.336754 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bxcl\" (UniqueName: \"kubernetes.io/projected/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-kube-api-access-5bxcl\") pod \"redhat-operators-9757r\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " pod="openshift-marketplace/redhat-operators-9757r" Jan 
21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.375290 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.606007 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9757r"] Jan 21 07:00:38 crc kubenswrapper[4913]: I0121 07:00:38.700197 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerStarted","Data":"44ae31c06ae6bf3ba2a16e82bee60b8ea3b82de96c830a79f05c92b64bfbd570"} Jan 21 07:00:39 crc kubenswrapper[4913]: I0121 07:00:39.711840 4913 generic.go:334] "Generic (PLEG): container finished" podID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerID="2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761" exitCode=0 Jan 21 07:00:39 crc kubenswrapper[4913]: I0121 07:00:39.711898 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerDied","Data":"2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761"} Jan 21 07:00:39 crc kubenswrapper[4913]: I0121 07:00:39.715313 4913 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 07:00:40 crc kubenswrapper[4913]: I0121 07:00:40.719008 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerStarted","Data":"878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18"} Jan 21 07:00:41 crc kubenswrapper[4913]: I0121 07:00:41.726371 4913 generic.go:334] "Generic (PLEG): container finished" podID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerID="878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18" exitCode=0 Jan 21 
07:00:41 crc kubenswrapper[4913]: I0121 07:00:41.726476 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerDied","Data":"878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18"} Jan 21 07:00:42 crc kubenswrapper[4913]: I0121 07:00:42.737444 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerStarted","Data":"ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda"} Jan 21 07:00:42 crc kubenswrapper[4913]: I0121 07:00:42.769066 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9757r" podStartSLOduration=2.197128654 podStartE2EDuration="4.769036847s" podCreationTimestamp="2026-01-21 07:00:38 +0000 UTC" firstStartedPulling="2026-01-21 07:00:39.714920686 +0000 UTC m=+1529.511280389" lastFinishedPulling="2026-01-21 07:00:42.286828879 +0000 UTC m=+1532.083188582" observedRunningTime="2026-01-21 07:00:42.763733763 +0000 UTC m=+1532.560093456" watchObservedRunningTime="2026-01-21 07:00:42.769036847 +0000 UTC m=+1532.565396560" Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.220570 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-smpb9"] Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.223140 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.227222 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-smpb9"] Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.386555 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-catalog-content\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.386670 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfjql\" (UniqueName: \"kubernetes.io/projected/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-kube-api-access-wfjql\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.386716 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-utilities\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.488132 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-catalog-content\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.488223 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wfjql\" (UniqueName: \"kubernetes.io/projected/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-kube-api-access-wfjql\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.488265 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-utilities\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.488711 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-catalog-content\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.488743 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-utilities\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.512565 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfjql\" (UniqueName: \"kubernetes.io/projected/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-kube-api-access-wfjql\") pod \"community-operators-smpb9\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.548614 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:44 crc kubenswrapper[4913]: I0121 07:00:44.819659 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-smpb9"] Jan 21 07:00:44 crc kubenswrapper[4913]: W0121 07:00:44.823479 4913 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bf61318_ded2_4361_a2d3_cec7aeb2d44e.slice/crio-9768768adae7c525977d82db30f3819b6da5b29263d0ca0cf5afc969b041e309 WatchSource:0}: Error finding container 9768768adae7c525977d82db30f3819b6da5b29263d0ca0cf5afc969b041e309: Status 404 returned error can't find the container with id 9768768adae7c525977d82db30f3819b6da5b29263d0ca0cf5afc969b041e309 Jan 21 07:00:45 crc kubenswrapper[4913]: I0121 07:00:45.764806 4913 generic.go:334] "Generic (PLEG): container finished" podID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerID="48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce" exitCode=0 Jan 21 07:00:45 crc kubenswrapper[4913]: I0121 07:00:45.764908 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smpb9" event={"ID":"3bf61318-ded2-4361-a2d3-cec7aeb2d44e","Type":"ContainerDied","Data":"48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce"} Jan 21 07:00:45 crc kubenswrapper[4913]: I0121 07:00:45.765191 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smpb9" event={"ID":"3bf61318-ded2-4361-a2d3-cec7aeb2d44e","Type":"ContainerStarted","Data":"9768768adae7c525977d82db30f3819b6da5b29263d0ca0cf5afc969b041e309"} Jan 21 07:00:47 crc kubenswrapper[4913]: I0121 07:00:47.526429 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:00:47 crc kubenswrapper[4913]: E0121 07:00:47.526831 4913 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:00:48 crc kubenswrapper[4913]: I0121 07:00:48.376304 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:48 crc kubenswrapper[4913]: I0121 07:00:48.376620 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:48 crc kubenswrapper[4913]: I0121 07:00:48.782435 4913 generic.go:334] "Generic (PLEG): container finished" podID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerID="9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148" exitCode=0 Jan 21 07:00:48 crc kubenswrapper[4913]: I0121 07:00:48.782490 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smpb9" event={"ID":"3bf61318-ded2-4361-a2d3-cec7aeb2d44e","Type":"ContainerDied","Data":"9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148"} Jan 21 07:00:49 crc kubenswrapper[4913]: I0121 07:00:49.440641 4913 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9757r" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="registry-server" probeResult="failure" output=< Jan 21 07:00:49 crc kubenswrapper[4913]: timeout: failed to connect service ":50051" within 1s Jan 21 07:00:49 crc kubenswrapper[4913]: > Jan 21 07:00:50 crc kubenswrapper[4913]: I0121 07:00:50.798176 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smpb9" 
event={"ID":"3bf61318-ded2-4361-a2d3-cec7aeb2d44e","Type":"ContainerStarted","Data":"9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840"} Jan 21 07:00:50 crc kubenswrapper[4913]: I0121 07:00:50.818923 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-smpb9" podStartSLOduration=2.786692871 podStartE2EDuration="6.818903745s" podCreationTimestamp="2026-01-21 07:00:44 +0000 UTC" firstStartedPulling="2026-01-21 07:00:45.767322477 +0000 UTC m=+1535.563682150" lastFinishedPulling="2026-01-21 07:00:49.799533321 +0000 UTC m=+1539.595893024" observedRunningTime="2026-01-21 07:00:50.814196996 +0000 UTC m=+1540.610556659" watchObservedRunningTime="2026-01-21 07:00:50.818903745 +0000 UTC m=+1540.615263418" Jan 21 07:00:53 crc kubenswrapper[4913]: I0121 07:00:53.824689 4913 generic.go:334] "Generic (PLEG): container finished" podID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerID="7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60" exitCode=0 Jan 21 07:00:53 crc kubenswrapper[4913]: I0121 07:00:53.824785 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ffklg/must-gather-ftkd9" event={"ID":"0860e960-b47d-4cea-9c37-d28691c9a4d9","Type":"ContainerDied","Data":"7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60"} Jan 21 07:00:53 crc kubenswrapper[4913]: I0121 07:00:53.825750 4913 scope.go:117] "RemoveContainer" containerID="7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60" Jan 21 07:00:54 crc kubenswrapper[4913]: I0121 07:00:54.549519 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:54 crc kubenswrapper[4913]: I0121 07:00:54.549573 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:54 crc kubenswrapper[4913]: I0121 07:00:54.562747 4913 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-must-gather-ffklg_must-gather-ftkd9_0860e960-b47d-4cea-9c37-d28691c9a4d9/gather/0.log" Jan 21 07:00:54 crc kubenswrapper[4913]: I0121 07:00:54.608934 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:54 crc kubenswrapper[4913]: I0121 07:00:54.898825 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:54 crc kubenswrapper[4913]: I0121 07:00:54.956308 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-smpb9"] Jan 21 07:00:56 crc kubenswrapper[4913]: I0121 07:00:56.844894 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-smpb9" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="registry-server" containerID="cri-o://9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840" gracePeriod=2 Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.202391 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.362195 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-catalog-content\") pod \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.362334 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfjql\" (UniqueName: \"kubernetes.io/projected/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-kube-api-access-wfjql\") pod \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.362374 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-utilities\") pod \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\" (UID: \"3bf61318-ded2-4361-a2d3-cec7aeb2d44e\") " Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.363491 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-utilities" (OuterVolumeSpecName: "utilities") pod "3bf61318-ded2-4361-a2d3-cec7aeb2d44e" (UID: "3bf61318-ded2-4361-a2d3-cec7aeb2d44e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.369800 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-kube-api-access-wfjql" (OuterVolumeSpecName: "kube-api-access-wfjql") pod "3bf61318-ded2-4361-a2d3-cec7aeb2d44e" (UID: "3bf61318-ded2-4361-a2d3-cec7aeb2d44e"). InnerVolumeSpecName "kube-api-access-wfjql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.416504 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bf61318-ded2-4361-a2d3-cec7aeb2d44e" (UID: "3bf61318-ded2-4361-a2d3-cec7aeb2d44e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.464207 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfjql\" (UniqueName: \"kubernetes.io/projected/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-kube-api-access-wfjql\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.464273 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.464298 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf61318-ded2-4361-a2d3-cec7aeb2d44e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.852134 4913 generic.go:334] "Generic (PLEG): container finished" podID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerID="9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840" exitCode=0 Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.852559 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smpb9" event={"ID":"3bf61318-ded2-4361-a2d3-cec7aeb2d44e","Type":"ContainerDied","Data":"9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840"} Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.852626 4913 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-smpb9" event={"ID":"3bf61318-ded2-4361-a2d3-cec7aeb2d44e","Type":"ContainerDied","Data":"9768768adae7c525977d82db30f3819b6da5b29263d0ca0cf5afc969b041e309"} Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.852649 4913 scope.go:117] "RemoveContainer" containerID="9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.852790 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smpb9" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.884979 4913 scope.go:117] "RemoveContainer" containerID="9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.911689 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-smpb9"] Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.917063 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-smpb9"] Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.936889 4913 scope.go:117] "RemoveContainer" containerID="48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.965875 4913 scope.go:117] "RemoveContainer" containerID="9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840" Jan 21 07:00:57 crc kubenswrapper[4913]: E0121 07:00:57.966440 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840\": container with ID starting with 9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840 not found: ID does not exist" containerID="9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 
07:00:57.966519 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840"} err="failed to get container status \"9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840\": rpc error: code = NotFound desc = could not find container \"9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840\": container with ID starting with 9a7b24e79f617d29346b83d8814e546e17e8af0ad7c1204da178b92648d79840 not found: ID does not exist" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.966557 4913 scope.go:117] "RemoveContainer" containerID="9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148" Jan 21 07:00:57 crc kubenswrapper[4913]: E0121 07:00:57.966988 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148\": container with ID starting with 9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148 not found: ID does not exist" containerID="9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.967020 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148"} err="failed to get container status \"9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148\": rpc error: code = NotFound desc = could not find container \"9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148\": container with ID starting with 9f4d0c5a30f6169e76fa7535eba24ddd066d7ba9004ffed1f422c20580d1b148 not found: ID does not exist" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.967042 4913 scope.go:117] "RemoveContainer" containerID="48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce" Jan 21 07:00:57 crc 
kubenswrapper[4913]: E0121 07:00:57.967912 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce\": container with ID starting with 48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce not found: ID does not exist" containerID="48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce" Jan 21 07:00:57 crc kubenswrapper[4913]: I0121 07:00:57.967957 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce"} err="failed to get container status \"48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce\": rpc error: code = NotFound desc = could not find container \"48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce\": container with ID starting with 48f9596bf42eaef9f8ab3b9642ed442b18fca6d92f0fa7f3b1c8ca42abe95bce not found: ID does not exist" Jan 21 07:00:58 crc kubenswrapper[4913]: I0121 07:00:58.458713 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:58 crc kubenswrapper[4913]: I0121 07:00:58.512663 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:00:58 crc kubenswrapper[4913]: I0121 07:00:58.526738 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:00:58 crc kubenswrapper[4913]: E0121 07:00:58.527012 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:00:58 crc kubenswrapper[4913]: I0121 07:00:58.537295 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" path="/var/lib/kubelet/pods/3bf61318-ded2-4361-a2d3-cec7aeb2d44e/volumes" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.257501 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9757r"] Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.260083 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9757r" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="registry-server" containerID="cri-o://ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda" gracePeriod=2 Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.701032 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.812963 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-catalog-content\") pod \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.813308 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bxcl\" (UniqueName: \"kubernetes.io/projected/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-kube-api-access-5bxcl\") pod \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.813362 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-utilities\") pod \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\" (UID: \"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10\") " Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.814627 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-utilities" (OuterVolumeSpecName: "utilities") pod "20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" (UID: "20a5243a-5f72-42e7-80b4-dfc0d5ef5f10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.821123 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-kube-api-access-5bxcl" (OuterVolumeSpecName: "kube-api-access-5bxcl") pod "20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" (UID: "20a5243a-5f72-42e7-80b4-dfc0d5ef5f10"). InnerVolumeSpecName "kube-api-access-5bxcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.877702 4913 generic.go:334] "Generic (PLEG): container finished" podID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerID="ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda" exitCode=0 Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.877917 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerDied","Data":"ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda"} Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.878011 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9757r" event={"ID":"20a5243a-5f72-42e7-80b4-dfc0d5ef5f10","Type":"ContainerDied","Data":"44ae31c06ae6bf3ba2a16e82bee60b8ea3b82de96c830a79f05c92b64bfbd570"} Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.878082 4913 scope.go:117] "RemoveContainer" containerID="ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.878156 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9757r" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.900606 4913 scope.go:117] "RemoveContainer" containerID="878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.915466 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.915831 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bxcl\" (UniqueName: \"kubernetes.io/projected/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-kube-api-access-5bxcl\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.921274 4913 scope.go:117] "RemoveContainer" containerID="2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.930439 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" (UID: "20a5243a-5f72-42e7-80b4-dfc0d5ef5f10"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.943021 4913 scope.go:117] "RemoveContainer" containerID="ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda" Jan 21 07:01:00 crc kubenswrapper[4913]: E0121 07:01:00.944007 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda\": container with ID starting with ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda not found: ID does not exist" containerID="ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.944050 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda"} err="failed to get container status \"ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda\": rpc error: code = NotFound desc = could not find container \"ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda\": container with ID starting with ff0c0e8c6d44ebc3730b81ac53e29aea45146b332f6069bbf912ffbe7c47aeda not found: ID does not exist" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.944075 4913 scope.go:117] "RemoveContainer" containerID="878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18" Jan 21 07:01:00 crc kubenswrapper[4913]: E0121 07:01:00.944510 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18\": container with ID starting with 878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18 not found: ID does not exist" containerID="878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.944547 
4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18"} err="failed to get container status \"878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18\": rpc error: code = NotFound desc = could not find container \"878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18\": container with ID starting with 878a68f324489d2be7e77777f56e9e91a234f863bec9abec6fa161a0eb2e3c18 not found: ID does not exist" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.944572 4913 scope.go:117] "RemoveContainer" containerID="2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761" Jan 21 07:01:00 crc kubenswrapper[4913]: E0121 07:01:00.945566 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761\": container with ID starting with 2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761 not found: ID does not exist" containerID="2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761" Jan 21 07:01:00 crc kubenswrapper[4913]: I0121 07:01:00.945620 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761"} err="failed to get container status \"2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761\": rpc error: code = NotFound desc = could not find container \"2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761\": container with ID starting with 2c4496986439d4bded5b26cdd2ed778cd3aaa30456232af9e6b60448e6e8a761 not found: ID does not exist" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.022979 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ffklg/must-gather-ftkd9"] Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 
07:01:01.023434 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ffklg/must-gather-ftkd9" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="copy" containerID="cri-o://db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c" gracePeriod=2 Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.024098 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.027669 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ffklg/must-gather-ftkd9"] Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.233920 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9757r"] Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.238302 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9757r"] Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.316295 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ffklg_must-gather-ftkd9_0860e960-b47d-4cea-9c37-d28691c9a4d9/copy/0.log" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.316803 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ffklg/must-gather-ftkd9" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.327852 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc58r\" (UniqueName: \"kubernetes.io/projected/0860e960-b47d-4cea-9c37-d28691c9a4d9-kube-api-access-tc58r\") pod \"0860e960-b47d-4cea-9c37-d28691c9a4d9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.327970 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0860e960-b47d-4cea-9c37-d28691c9a4d9-must-gather-output\") pod \"0860e960-b47d-4cea-9c37-d28691c9a4d9\" (UID: \"0860e960-b47d-4cea-9c37-d28691c9a4d9\") " Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.332809 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0860e960-b47d-4cea-9c37-d28691c9a4d9-kube-api-access-tc58r" (OuterVolumeSpecName: "kube-api-access-tc58r") pod "0860e960-b47d-4cea-9c37-d28691c9a4d9" (UID: "0860e960-b47d-4cea-9c37-d28691c9a4d9"). InnerVolumeSpecName "kube-api-access-tc58r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.404622 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0860e960-b47d-4cea-9c37-d28691c9a4d9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0860e960-b47d-4cea-9c37-d28691c9a4d9" (UID: "0860e960-b47d-4cea-9c37-d28691c9a4d9"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.429163 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc58r\" (UniqueName: \"kubernetes.io/projected/0860e960-b47d-4cea-9c37-d28691c9a4d9-kube-api-access-tc58r\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.429230 4913 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0860e960-b47d-4cea-9c37-d28691c9a4d9-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.884753 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ffklg_must-gather-ftkd9_0860e960-b47d-4cea-9c37-d28691c9a4d9/copy/0.log" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.885466 4913 generic.go:334] "Generic (PLEG): container finished" podID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerID="db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c" exitCode=143 Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.885497 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ffklg/must-gather-ftkd9" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.885522 4913 scope.go:117] "RemoveContainer" containerID="db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.903650 4913 scope.go:117] "RemoveContainer" containerID="7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.937499 4913 scope.go:117] "RemoveContainer" containerID="db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c" Jan 21 07:01:01 crc kubenswrapper[4913]: E0121 07:01:01.937969 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c\": container with ID starting with db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c not found: ID does not exist" containerID="db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.938012 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c"} err="failed to get container status \"db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c\": rpc error: code = NotFound desc = could not find container \"db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c\": container with ID starting with db293b65c95243964f35965f824fca4f5217cc8be3395e5507cd7e426ae54f2c not found: ID does not exist" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.938036 4913 scope.go:117] "RemoveContainer" containerID="7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60" Jan 21 07:01:01 crc kubenswrapper[4913]: E0121 07:01:01.938356 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60\": container with ID starting with 7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60 not found: ID does not exist" containerID="7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60" Jan 21 07:01:01 crc kubenswrapper[4913]: I0121 07:01:01.938422 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60"} err="failed to get container status \"7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60\": rpc error: code = NotFound desc = could not find container \"7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60\": container with ID starting with 7aa92789dc74e867ff6f3909e968d1eddfc4c8fd9276af0f84f5f5da021b6c60 not found: ID does not exist" Jan 21 07:01:02 crc kubenswrapper[4913]: I0121 07:01:02.538727 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" path="/var/lib/kubelet/pods/0860e960-b47d-4cea-9c37-d28691c9a4d9/volumes" Jan 21 07:01:02 crc kubenswrapper[4913]: I0121 07:01:02.541165 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" path="/var/lib/kubelet/pods/20a5243a-5f72-42e7-80b4-dfc0d5ef5f10/volumes" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.105531 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzz9"] Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 07:01:11.106429 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="copy" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106447 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="copy" Jan 21 07:01:11 crc kubenswrapper[4913]: 
E0121 07:01:11.106460 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="registry-server" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106469 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="registry-server" Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 07:01:11.106487 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="extract-utilities" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106496 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="extract-utilities" Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 07:01:11.106511 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="extract-content" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106519 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="extract-content" Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 07:01:11.106531 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="extract-content" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106538 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="extract-content" Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 07:01:11.106553 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="registry-server" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106561 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="registry-server" Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 
07:01:11.106576 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="extract-utilities" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106584 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="extract-utilities" Jan 21 07:01:11 crc kubenswrapper[4913]: E0121 07:01:11.106619 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="gather" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106627 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="gather" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106745 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="gather" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106757 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a5243a-5f72-42e7-80b4-dfc0d5ef5f10" containerName="registry-server" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106773 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf61318-ded2-4361-a2d3-cec7aeb2d44e" containerName="registry-server" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.106783 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="0860e960-b47d-4cea-9c37-d28691c9a4d9" containerName="copy" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.107706 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.130392 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzz9"] Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.276369 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-catalog-content\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.276453 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjqlh\" (UniqueName: \"kubernetes.io/projected/32ecbe88-f107-4b8b-b311-046170e29680-kube-api-access-xjqlh\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.276520 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-utilities\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.377715 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjqlh\" (UniqueName: \"kubernetes.io/projected/32ecbe88-f107-4b8b-b311-046170e29680-kube-api-access-xjqlh\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.377816 4913 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-utilities\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.377871 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-catalog-content\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.378562 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-catalog-content\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.378560 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-utilities\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.399425 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjqlh\" (UniqueName: \"kubernetes.io/projected/32ecbe88-f107-4b8b-b311-046170e29680-kube-api-access-xjqlh\") pod \"redhat-marketplace-cwzz9\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.437001 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.840990 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzz9"] Jan 21 07:01:11 crc kubenswrapper[4913]: I0121 07:01:11.954804 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzz9" event={"ID":"32ecbe88-f107-4b8b-b311-046170e29680","Type":"ContainerStarted","Data":"c197ea729d234b260070af02e3d362e9f19e17a47868a2c5004ab193639c96af"} Jan 21 07:01:12 crc kubenswrapper[4913]: I0121 07:01:12.527238 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:01:12 crc kubenswrapper[4913]: E0121 07:01:12.527703 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:01:12 crc kubenswrapper[4913]: I0121 07:01:12.964093 4913 generic.go:334] "Generic (PLEG): container finished" podID="32ecbe88-f107-4b8b-b311-046170e29680" containerID="9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09" exitCode=0 Jan 21 07:01:12 crc kubenswrapper[4913]: I0121 07:01:12.964170 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzz9" event={"ID":"32ecbe88-f107-4b8b-b311-046170e29680","Type":"ContainerDied","Data":"9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09"} Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.431041 4913 scope.go:117] "RemoveContainer" containerID="86d4e11b0de675ad1a6115e20c1cf3adbad2b6fdffe182ff3c8662834b65ee6c" 
Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.475015 4913 scope.go:117] "RemoveContainer" containerID="5c53ff64038d2e9a173936a2a3562e00a97dd728a9ffb4ca06369e4d82394063" Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.511210 4913 scope.go:117] "RemoveContainer" containerID="2fbf80cb1c97a1b30187af9913da5a773063cae1392a250c64b697455ea04db1" Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.572853 4913 scope.go:117] "RemoveContainer" containerID="432a97f97a1e748db15a8c859dcaa7de8838a131f61c83acbb060114eb9ecddf" Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.597641 4913 scope.go:117] "RemoveContainer" containerID="9eca8ae0460adba99832950743728508ef374a3fcd006d3227af45af81c4c272" Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.621990 4913 scope.go:117] "RemoveContainer" containerID="e1ea5f6140029683c8a222d0bbd83ba86ea14378a3c9af4d524c0c94971d8e7a" Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.645281 4913 scope.go:117] "RemoveContainer" containerID="f6cac025b126b4c0c411e321cbd1813b091d8ec15b54ab9a538e93d26406b363" Jan 21 07:01:13 crc kubenswrapper[4913]: E0121 07:01:13.947813 4913 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32ecbe88_f107_4b8b_b311_046170e29680.slice/crio-conmon-a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8.scope\": RecentStats: unable to find data in memory cache]" Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.973344 4913 generic.go:334] "Generic (PLEG): container finished" podID="32ecbe88-f107-4b8b-b311-046170e29680" containerID="a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8" exitCode=0 Jan 21 07:01:13 crc kubenswrapper[4913]: I0121 07:01:13.973393 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzz9" 
event={"ID":"32ecbe88-f107-4b8b-b311-046170e29680","Type":"ContainerDied","Data":"a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8"} Jan 21 07:01:14 crc kubenswrapper[4913]: I0121 07:01:14.981731 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzz9" event={"ID":"32ecbe88-f107-4b8b-b311-046170e29680","Type":"ContainerStarted","Data":"a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476"} Jan 21 07:01:15 crc kubenswrapper[4913]: I0121 07:01:15.002346 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cwzz9" podStartSLOduration=2.580253192 podStartE2EDuration="4.002331805s" podCreationTimestamp="2026-01-21 07:01:11 +0000 UTC" firstStartedPulling="2026-01-21 07:01:12.966121024 +0000 UTC m=+1562.762480727" lastFinishedPulling="2026-01-21 07:01:14.388199667 +0000 UTC m=+1564.184559340" observedRunningTime="2026-01-21 07:01:14.999868072 +0000 UTC m=+1564.796227765" watchObservedRunningTime="2026-01-21 07:01:15.002331805 +0000 UTC m=+1564.798691478" Jan 21 07:01:21 crc kubenswrapper[4913]: I0121 07:01:21.437942 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:21 crc kubenswrapper[4913]: I0121 07:01:21.438355 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:21 crc kubenswrapper[4913]: I0121 07:01:21.505253 4913 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:22 crc kubenswrapper[4913]: I0121 07:01:22.091754 4913 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:22 crc kubenswrapper[4913]: I0121 07:01:22.149982 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-cwzz9"] Jan 21 07:01:24 crc kubenswrapper[4913]: I0121 07:01:24.039940 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cwzz9" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="registry-server" containerID="cri-o://a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476" gracePeriod=2 Jan 21 07:01:24 crc kubenswrapper[4913]: I0121 07:01:24.526792 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:01:24 crc kubenswrapper[4913]: E0121 07:01:24.527118 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.020609 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.049907 4913 generic.go:334] "Generic (PLEG): container finished" podID="32ecbe88-f107-4b8b-b311-046170e29680" containerID="a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476" exitCode=0 Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.049954 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzz9" event={"ID":"32ecbe88-f107-4b8b-b311-046170e29680","Type":"ContainerDied","Data":"a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476"} Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.050339 4913 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cwzz9" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.051159 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cwzz9" event={"ID":"32ecbe88-f107-4b8b-b311-046170e29680","Type":"ContainerDied","Data":"c197ea729d234b260070af02e3d362e9f19e17a47868a2c5004ab193639c96af"} Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.051188 4913 scope.go:117] "RemoveContainer" containerID="a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.073048 4913 scope.go:117] "RemoveContainer" containerID="a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.092905 4913 scope.go:117] "RemoveContainer" containerID="9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.114930 4913 scope.go:117] "RemoveContainer" containerID="a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476" Jan 21 07:01:25 crc kubenswrapper[4913]: E0121 07:01:25.115811 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476\": container with ID starting with a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476 not found: ID does not exist" containerID="a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.115889 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476"} err="failed to get container status \"a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476\": rpc error: code = NotFound desc = could not find container 
\"a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476\": container with ID starting with a93dad5d7808e986b934afc8a2f5837273f607a2a96172cac73c149ef2f3c476 not found: ID does not exist" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.115924 4913 scope.go:117] "RemoveContainer" containerID="a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8" Jan 21 07:01:25 crc kubenswrapper[4913]: E0121 07:01:25.116722 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8\": container with ID starting with a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8 not found: ID does not exist" containerID="a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.116754 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8"} err="failed to get container status \"a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8\": rpc error: code = NotFound desc = could not find container \"a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8\": container with ID starting with a4f3e0f1aac7094846260707d7a8b7b7ce32564f680bbb0ca4dec6a2fabe09c8 not found: ID does not exist" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.116772 4913 scope.go:117] "RemoveContainer" containerID="9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09" Jan 21 07:01:25 crc kubenswrapper[4913]: E0121 07:01:25.117259 4913 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09\": container with ID starting with 9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09 not found: ID does not exist" 
containerID="9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.117309 4913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09"} err="failed to get container status \"9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09\": rpc error: code = NotFound desc = could not find container \"9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09\": container with ID starting with 9a532570d073d98235996bb41804708fc8f7a6494bde18663e69936fa03dfa09 not found: ID does not exist" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.165091 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjqlh\" (UniqueName: \"kubernetes.io/projected/32ecbe88-f107-4b8b-b311-046170e29680-kube-api-access-xjqlh\") pod \"32ecbe88-f107-4b8b-b311-046170e29680\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.165316 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-catalog-content\") pod \"32ecbe88-f107-4b8b-b311-046170e29680\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.165359 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-utilities\") pod \"32ecbe88-f107-4b8b-b311-046170e29680\" (UID: \"32ecbe88-f107-4b8b-b311-046170e29680\") " Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.166902 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-utilities" (OuterVolumeSpecName: "utilities") pod 
"32ecbe88-f107-4b8b-b311-046170e29680" (UID: "32ecbe88-f107-4b8b-b311-046170e29680"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.172599 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ecbe88-f107-4b8b-b311-046170e29680-kube-api-access-xjqlh" (OuterVolumeSpecName: "kube-api-access-xjqlh") pod "32ecbe88-f107-4b8b-b311-046170e29680" (UID: "32ecbe88-f107-4b8b-b311-046170e29680"). InnerVolumeSpecName "kube-api-access-xjqlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.203930 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32ecbe88-f107-4b8b-b311-046170e29680" (UID: "32ecbe88-f107-4b8b-b311-046170e29680"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.267210 4913 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.267255 4913 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ecbe88-f107-4b8b-b311-046170e29680-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.267274 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjqlh\" (UniqueName: \"kubernetes.io/projected/32ecbe88-f107-4b8b-b311-046170e29680-kube-api-access-xjqlh\") on node \"crc\" DevicePath \"\"" Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.386212 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzz9"] Jan 21 07:01:25 crc kubenswrapper[4913]: I0121 07:01:25.393295 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cwzz9"] Jan 21 07:01:26 crc kubenswrapper[4913]: I0121 07:01:26.537224 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ecbe88-f107-4b8b-b311-046170e29680" path="/var/lib/kubelet/pods/32ecbe88-f107-4b8b-b311-046170e29680/volumes" Jan 21 07:01:36 crc kubenswrapper[4913]: I0121 07:01:36.526970 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:01:36 crc kubenswrapper[4913]: E0121 07:01:36.528027 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:01:48 crc kubenswrapper[4913]: I0121 07:01:48.526421 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:01:48 crc kubenswrapper[4913]: E0121 07:01:48.527437 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.841488 4913 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5jnbm/must-gather-96p7d"] Jan 21 07:01:58 crc kubenswrapper[4913]: E0121 07:01:58.842465 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="extract-utilities" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.842487 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="extract-utilities" Jan 21 07:01:58 crc kubenswrapper[4913]: E0121 07:01:58.842509 4913 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="registry-server" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.842519 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="registry-server" Jan 21 07:01:58 crc kubenswrapper[4913]: E0121 07:01:58.842543 4913 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="extract-content" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.842554 4913 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="extract-content" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.842730 4913 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ecbe88-f107-4b8b-b311-046170e29680" containerName="registry-server" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.843531 4913 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.846185 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5jnbm"/"kube-root-ca.crt" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.847059 4913 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5jnbm"/"openshift-service-ca.crt" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.865580 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42cc95b-6480-4433-9ad8-112d8e53faff-must-gather-output\") pod \"must-gather-96p7d\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.865872 4913 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6fss\" (UniqueName: \"kubernetes.io/projected/a42cc95b-6480-4433-9ad8-112d8e53faff-kube-api-access-l6fss\") pod \"must-gather-96p7d\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.904161 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-must-gather-5jnbm/must-gather-96p7d"] Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.966664 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6fss\" (UniqueName: \"kubernetes.io/projected/a42cc95b-6480-4433-9ad8-112d8e53faff-kube-api-access-l6fss\") pod \"must-gather-96p7d\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.966757 4913 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42cc95b-6480-4433-9ad8-112d8e53faff-must-gather-output\") pod \"must-gather-96p7d\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.967131 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42cc95b-6480-4433-9ad8-112d8e53faff-must-gather-output\") pod \"must-gather-96p7d\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:58 crc kubenswrapper[4913]: I0121 07:01:58.986283 4913 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6fss\" (UniqueName: \"kubernetes.io/projected/a42cc95b-6480-4433-9ad8-112d8e53faff-kube-api-access-l6fss\") pod \"must-gather-96p7d\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") " pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:59 crc kubenswrapper[4913]: I0121 07:01:59.164457 4913 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5jnbm/must-gather-96p7d" Jan 21 07:01:59 crc kubenswrapper[4913]: I0121 07:01:59.435224 4913 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5jnbm/must-gather-96p7d"] Jan 21 07:02:00 crc kubenswrapper[4913]: I0121 07:02:00.317352 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5jnbm/must-gather-96p7d" event={"ID":"a42cc95b-6480-4433-9ad8-112d8e53faff","Type":"ContainerStarted","Data":"89b922a579843f94dde47e41360be3db7362db1133978e47557ded141cd5692e"} Jan 21 07:02:00 crc kubenswrapper[4913]: I0121 07:02:00.317681 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5jnbm/must-gather-96p7d" event={"ID":"a42cc95b-6480-4433-9ad8-112d8e53faff","Type":"ContainerStarted","Data":"7e20d750e2020a95c169976d3c2fcfaf8eeae887d04e42620318c20aed05c9b2"} Jan 21 07:02:00 crc kubenswrapper[4913]: I0121 07:02:00.317691 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5jnbm/must-gather-96p7d" event={"ID":"a42cc95b-6480-4433-9ad8-112d8e53faff","Type":"ContainerStarted","Data":"7f29b4e5874bc3b2f2cda0ec1c031c1c12eaaed7301c933bd23dc7304cbe535f"} Jan 21 07:02:00 crc kubenswrapper[4913]: I0121 07:02:00.342533 4913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5jnbm/must-gather-96p7d" podStartSLOduration=2.342509416 podStartE2EDuration="2.342509416s" podCreationTimestamp="2026-01-21 07:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 07:02:00.341342176 +0000 UTC m=+1610.137701849" watchObservedRunningTime="2026-01-21 07:02:00.342509416 +0000 UTC m=+1610.138869109" Jan 21 07:02:00 crc kubenswrapper[4913]: I0121 07:02:00.531208 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:02:00 crc 
kubenswrapper[4913]: E0121 07:02:00.531422 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:02:08 crc kubenswrapper[4913]: I0121 07:02:08.700020 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/controller/0.log" Jan 21 07:02:08 crc kubenswrapper[4913]: I0121 07:02:08.707527 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/kube-rbac-proxy/0.log" Jan 21 07:02:08 crc kubenswrapper[4913]: I0121 07:02:08.721105 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mnxc8_a33768cf-18ec-4cec-94fb-303b0779eb59/frr-k8s-webhook-server/0.log" Jan 21 07:02:08 crc kubenswrapper[4913]: I0121 07:02:08.755647 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/controller/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.583373 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.595667 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/reloader/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.604292 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr-metrics/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.615369 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.626820 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy-frr/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.634320 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-frr-files/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.643481 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-reloader/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.650943 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-metrics/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.683939 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fc59f99b-jzt7r_e89d9462-a010-4873-9a7a-ff85114b35f9/manager/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.697208 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6879b6b49c-65nv9_09278577-df56-4906-b822-79df291100ae/webhook-server/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.868809 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/speaker/0.log" Jan 21 07:02:09 crc kubenswrapper[4913]: I0121 07:02:09.875158 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/kube-rbac-proxy/0.log" Jan 21 07:02:13 crc kubenswrapper[4913]: I0121 07:02:13.526176 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:02:13 crc kubenswrapper[4913]: E0121 07:02:13.526871 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:02:16 crc kubenswrapper[4913]: I0121 07:02:16.461549 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5gzt8_6cdf7744-1629-46a4-b176-0fc75c149a95/control-plane-machine-set-operator/0.log" Jan 21 07:02:16 crc kubenswrapper[4913]: I0121 07:02:16.495337 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/kube-rbac-proxy/0.log" Jan 21 07:02:16 crc kubenswrapper[4913]: I0121 07:02:16.508223 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/machine-api-operator/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.094439 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/controller/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.101573 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/kube-rbac-proxy/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.115221 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mnxc8_a33768cf-18ec-4cec-94fb-303b0779eb59/frr-k8s-webhook-server/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.141465 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/controller/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.526338 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:02:28 crc kubenswrapper[4913]: E0121 07:02:28.526609 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.584084 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.594258 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/reloader/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.597766 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr-metrics/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.604677 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.615763 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy-frr/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.625161 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-frr-files/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.632869 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-reloader/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.638306 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-metrics/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.669899 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fc59f99b-jzt7r_e89d9462-a010-4873-9a7a-ff85114b35f9/manager/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.679988 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6879b6b49c-65nv9_09278577-df56-4906-b822-79df291100ae/webhook-server/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.793767 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/speaker/0.log" Jan 21 07:02:28 crc kubenswrapper[4913]: I0121 07:02:28.808402 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/kube-rbac-proxy/0.log" Jan 21 07:02:38 crc kubenswrapper[4913]: I0121 07:02:38.895413 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_6bd2ad61-8bab-42d9-a09c-cf48255cc25c/extract/0.log" Jan 21 07:02:38 crc kubenswrapper[4913]: I0121 07:02:38.904239 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_6bd2ad61-8bab-42d9-a09c-cf48255cc25c/util/0.log" Jan 21 07:02:38 crc kubenswrapper[4913]: I0121 07:02:38.914564 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc66dqg_6bd2ad61-8bab-42d9-a09c-cf48255cc25c/pull/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.219167 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rp8wd_a8ba24ca-c946-4684-817a-0ae5bada3ecd/registry-server/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.223416 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rp8wd_a8ba24ca-c946-4684-817a-0ae5bada3ecd/extract-utilities/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.230294 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rp8wd_a8ba24ca-c946-4684-817a-0ae5bada3ecd/extract-content/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.565434 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tmk45_e81adc58-27d6-4087-9902-6e61aba9bfaa/registry-server/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.572910 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tmk45_e81adc58-27d6-4087-9902-6e61aba9bfaa/extract-utilities/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.581754 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-tmk45_e81adc58-27d6-4087-9902-6e61aba9bfaa/extract-content/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.597930 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mmmzm_9850b956-f0a1-4e29-b5c2-703b0aa7b697/marketplace-operator/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.670966 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-frpd4_b6d83360-7a65-47b3-98df-42902962da8d/registry-server/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.675739 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-frpd4_b6d83360-7a65-47b3-98df-42902962da8d/extract-utilities/0.log" Jan 21 07:02:39 crc kubenswrapper[4913]: I0121 07:02:39.684402 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-frpd4_b6d83360-7a65-47b3-98df-42902962da8d/extract-content/0.log" Jan 21 07:02:40 crc kubenswrapper[4913]: I0121 07:02:40.028231 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pp6lf_4fd9a0ea-0344-4e90-87f0-34a568804f80/registry-server/0.log" Jan 21 07:02:40 crc kubenswrapper[4913]: I0121 07:02:40.037730 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pp6lf_4fd9a0ea-0344-4e90-87f0-34a568804f80/extract-utilities/0.log" Jan 21 07:02:40 crc kubenswrapper[4913]: I0121 07:02:40.042830 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pp6lf_4fd9a0ea-0344-4e90-87f0-34a568804f80/extract-content/0.log" Jan 21 07:02:43 crc kubenswrapper[4913]: I0121 07:02:43.526703 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:02:43 crc 
kubenswrapper[4913]: E0121 07:02:43.527205 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:02:56 crc kubenswrapper[4913]: I0121 07:02:56.526388 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64" Jan 21 07:02:56 crc kubenswrapper[4913]: E0121 07:02:56.529434 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" Jan 21 07:02:59 crc kubenswrapper[4913]: I0121 07:02:59.862837 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/controller/0.log" Jan 21 07:02:59 crc kubenswrapper[4913]: I0121 07:02:59.868465 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-pc8gk_f59b1ff9-32cd-4fa1-916b-02dd65f8f75c/kube-rbac-proxy/0.log" Jan 21 07:02:59 crc kubenswrapper[4913]: I0121 07:02:59.881479 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-mnxc8_a33768cf-18ec-4cec-94fb-303b0779eb59/frr-k8s-webhook-server/0.log" Jan 21 07:02:59 crc kubenswrapper[4913]: I0121 07:02:59.905497 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/controller/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.329255 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.340463 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/reloader/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.358696 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/frr-metrics/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.367262 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.377341 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/kube-rbac-proxy-frr/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.385026 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-frr-files/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.397912 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-reloader/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.403400 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-zwvdk_1f6668fc-0d01-4942-abbe-758690c86480/cp-metrics/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.429150 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-69fc59f99b-jzt7r_e89d9462-a010-4873-9a7a-ff85114b35f9/manager/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.439176 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6879b6b49c-65nv9_09278577-df56-4906-b822-79df291100ae/webhook-server/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.552877 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/speaker/0.log" Jan 21 07:03:00 crc kubenswrapper[4913]: I0121 07:03:00.559425 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qpr6d_fda16a07-5908-4736-9835-a29ce1f85a7e/kube-rbac-proxy/0.log" Jan 21 07:03:01 crc kubenswrapper[4913]: I0121 07:03:01.327284 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5gzt8_6cdf7744-1629-46a4-b176-0fc75c149a95/control-plane-machine-set-operator/0.log" Jan 21 07:03:01 crc kubenswrapper[4913]: I0121 07:03:01.346724 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/kube-rbac-proxy/0.log" Jan 21 07:03:01 crc kubenswrapper[4913]: I0121 07:03:01.360200 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5fgwx_c207fbab-618a-4c01-8450-cb7ffad0f50d/machine-api-operator/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.406202 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/kube-multus-additional-cni-plugins/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.414278 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/egress-router-binary-copy/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.420825 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/cni-plugins/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.430475 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/bond-cni-plugin/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.435830 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/routeoverride-cni/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.442944 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/whereabouts-cni-bincopy/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.448932 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-2lxrr_0e8f223b-fd76-4720-a29f-cb89654e33f5/whereabouts-cni/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.462416 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-l6rtq_e70bbe19-3e5b-4629-b9bf-3c6fc8072836/multus-admission-controller/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.466725 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-l6rtq_e70bbe19-3e5b-4629-b9bf-3c6fc8072836/kube-rbac-proxy/0.log" Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.519093 4913 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/3.log"
Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.534120 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gn6lz_b6d9e7c1-346e-45ff-a7ce-4efd2f835dcf/kube-multus/2.log"
Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.556174 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-wfcsc_60ed8982-ee20-4330-861f-61509c39bbe7/network-metrics-daemon/0.log"
Jan 21 07:03:02 crc kubenswrapper[4913]: I0121 07:03:02.562359 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-wfcsc_60ed8982-ee20-4330-861f-61509c39bbe7/kube-rbac-proxy/0.log"
Jan 21 07:03:08 crc kubenswrapper[4913]: I0121 07:03:08.526440 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:03:08 crc kubenswrapper[4913]: E0121 07:03:08.526926 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:03:23 crc kubenswrapper[4913]: I0121 07:03:23.526931 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:03:23 crc kubenswrapper[4913]: E0121 07:03:23.528044 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:03:34 crc kubenswrapper[4913]: I0121 07:03:34.533468 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:03:34 crc kubenswrapper[4913]: E0121 07:03:34.534963 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:03:49 crc kubenswrapper[4913]: I0121 07:03:49.526282 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:03:49 crc kubenswrapper[4913]: E0121 07:03:49.527151 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:04:04 crc kubenswrapper[4913]: I0121 07:04:04.526402 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:04:04 crc kubenswrapper[4913]: E0121 07:04:04.527297 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:04:16 crc kubenswrapper[4913]: I0121 07:04:16.527127 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:04:16 crc kubenswrapper[4913]: E0121 07:04:16.528161 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:04:27 crc kubenswrapper[4913]: I0121 07:04:27.526223 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:04:27 crc kubenswrapper[4913]: E0121 07:04:27.527010 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:04:42 crc kubenswrapper[4913]: I0121 07:04:42.526297 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:04:42 crc kubenswrapper[4913]: E0121 07:04:42.527456 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:04:56 crc kubenswrapper[4913]: I0121 07:04:56.526769 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:04:56 crc kubenswrapper[4913]: E0121 07:04:56.527822 4913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sqswg_openshift-machine-config-operator(941d5e91-9bf3-44dc-be69-629cb2516e7c)\"" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c"
Jan 21 07:05:09 crc kubenswrapper[4913]: I0121 07:05:09.527086 4913 scope.go:117] "RemoveContainer" containerID="71543965bb65eaaf861aabcae48b3d5741c4698ac7e7893723c65b100538aa64"
Jan 21 07:05:11 crc kubenswrapper[4913]: I0121 07:05:11.078111 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" event={"ID":"941d5e91-9bf3-44dc-be69-629cb2516e7c","Type":"ContainerStarted","Data":"89ea175ca019332f032d60279f93f0918d80e130a13065c463949a31cc71e982"}
Jan 21 07:07:08 crc kubenswrapper[4913]: I0121 07:07:08.935827 4913 generic.go:334] "Generic (PLEG): container finished" podID="a42cc95b-6480-4433-9ad8-112d8e53faff" containerID="7e20d750e2020a95c169976d3c2fcfaf8eeae887d04e42620318c20aed05c9b2" exitCode=0
Jan 21 07:07:08 crc kubenswrapper[4913]: I0121 07:07:08.935906 4913 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5jnbm/must-gather-96p7d" event={"ID":"a42cc95b-6480-4433-9ad8-112d8e53faff","Type":"ContainerDied","Data":"7e20d750e2020a95c169976d3c2fcfaf8eeae887d04e42620318c20aed05c9b2"}
Jan 21 07:07:08 crc kubenswrapper[4913]: I0121 07:07:08.937314 4913 scope.go:117] "RemoveContainer" containerID="7e20d750e2020a95c169976d3c2fcfaf8eeae887d04e42620318c20aed05c9b2"
Jan 21 07:07:09 crc kubenswrapper[4913]: I0121 07:07:09.277649 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5jnbm_must-gather-96p7d_a42cc95b-6480-4433-9ad8-112d8e53faff/gather/0.log"
Jan 21 07:07:15 crc kubenswrapper[4913]: I0121 07:07:15.856909 4913 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5jnbm/must-gather-96p7d"]
Jan 21 07:07:15 crc kubenswrapper[4913]: I0121 07:07:15.857799 4913 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5jnbm/must-gather-96p7d" podUID="a42cc95b-6480-4433-9ad8-112d8e53faff" containerName="copy" containerID="cri-o://89b922a579843f94dde47e41360be3db7362db1133978e47557ded141cd5692e" gracePeriod=2
Jan 21 07:07:15 crc kubenswrapper[4913]: I0121 07:07:15.862571 4913 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5jnbm/must-gather-96p7d"]
Jan 21 07:07:15 crc kubenswrapper[4913]: I0121 07:07:15.988625 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5jnbm_must-gather-96p7d_a42cc95b-6480-4433-9ad8-112d8e53faff/copy/0.log"
Jan 21 07:07:15 crc kubenswrapper[4913]: I0121 07:07:15.989902 4913 generic.go:334] "Generic (PLEG): container finished" podID="a42cc95b-6480-4433-9ad8-112d8e53faff" containerID="89b922a579843f94dde47e41360be3db7362db1133978e47557ded141cd5692e" exitCode=143
Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.185338 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5jnbm_must-gather-96p7d_a42cc95b-6480-4433-9ad8-112d8e53faff/copy/0.log"
Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.186100 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5jnbm/must-gather-96p7d"
Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.303116 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42cc95b-6480-4433-9ad8-112d8e53faff-must-gather-output\") pod \"a42cc95b-6480-4433-9ad8-112d8e53faff\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") "
Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.303245 4913 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6fss\" (UniqueName: \"kubernetes.io/projected/a42cc95b-6480-4433-9ad8-112d8e53faff-kube-api-access-l6fss\") pod \"a42cc95b-6480-4433-9ad8-112d8e53faff\" (UID: \"a42cc95b-6480-4433-9ad8-112d8e53faff\") "
Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.309132 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42cc95b-6480-4433-9ad8-112d8e53faff-kube-api-access-l6fss" (OuterVolumeSpecName: "kube-api-access-l6fss") pod "a42cc95b-6480-4433-9ad8-112d8e53faff" (UID: "a42cc95b-6480-4433-9ad8-112d8e53faff"). InnerVolumeSpecName "kube-api-access-l6fss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.372855 4913 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a42cc95b-6480-4433-9ad8-112d8e53faff-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a42cc95b-6480-4433-9ad8-112d8e53faff" (UID: "a42cc95b-6480-4433-9ad8-112d8e53faff"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.404128 4913 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a42cc95b-6480-4433-9ad8-112d8e53faff-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.404179 4913 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6fss\" (UniqueName: \"kubernetes.io/projected/a42cc95b-6480-4433-9ad8-112d8e53faff-kube-api-access-l6fss\") on node \"crc\" DevicePath \"\""
Jan 21 07:07:16 crc kubenswrapper[4913]: I0121 07:07:16.535099 4913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a42cc95b-6480-4433-9ad8-112d8e53faff" path="/var/lib/kubelet/pods/a42cc95b-6480-4433-9ad8-112d8e53faff/volumes"
Jan 21 07:07:17 crc kubenswrapper[4913]: I0121 07:07:17.010695 4913 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5jnbm_must-gather-96p7d_a42cc95b-6480-4433-9ad8-112d8e53faff/copy/0.log"
Jan 21 07:07:17 crc kubenswrapper[4913]: I0121 07:07:17.011285 4913 scope.go:117] "RemoveContainer" containerID="89b922a579843f94dde47e41360be3db7362db1133978e47557ded141cd5692e"
Jan 21 07:07:17 crc kubenswrapper[4913]: I0121 07:07:17.011444 4913 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5jnbm/must-gather-96p7d"
Jan 21 07:07:17 crc kubenswrapper[4913]: I0121 07:07:17.039377 4913 scope.go:117] "RemoveContainer" containerID="7e20d750e2020a95c169976d3c2fcfaf8eeae887d04e42620318c20aed05c9b2"
Jan 21 07:07:38 crc kubenswrapper[4913]: I0121 07:07:38.319691 4913 patch_prober.go:28] interesting pod/machine-config-daemon-sqswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 07:07:38 crc kubenswrapper[4913]: I0121 07:07:38.320418 4913 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sqswg" podUID="941d5e91-9bf3-44dc-be69-629cb2516e7c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"